00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3686 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.078 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.079 The recommended git tool is: git 00:00:00.079 using credential 00000000-0000-0000-0000-000000000002 00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.131 Fetching changes from the remote Git repository 00:00:00.134 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.182 Using shallow fetch with depth 1 00:00:00.182 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.182 > git --version # timeout=10 00:00:00.224 > git --version # 'git version 2.39.2' 00:00:00.224 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.754 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.777 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.790 Checking out Revision 16485855f227725e8e9566ee24d00b82aaeff0db (FETCH_HEAD) 00:00:05.790 > git config core.sparsecheckout # timeout=10 00:00:05.802 > git read-tree -mu HEAD # timeout=10 00:00:05.819 > git checkout -f 16485855f227725e8e9566ee24d00b82aaeff0db # timeout=5 00:00:05.842 Commit message: "ansible/inventory: fix WFP37 mac address" 00:00:05.842 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:05.962 [Pipeline] Start of Pipeline 00:00:05.978 [Pipeline] library 00:00:05.980 Loading library shm_lib@f2beeebdc9d1f1c6c4d4791bb9c4c36bbeef976c 00:00:07.517 Library shm_lib@f2beeebdc9d1f1c6c4d4791bb9c4c36bbeef976c is cached. Copying from home. 00:00:07.551 [Pipeline] node 00:29:35.111 Still waiting to schedule task 00:29:35.111 Waiting for next available executor on ‘vagrant-vm-host’ 00:42:10.072 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:42:10.075 [Pipeline] { 00:42:10.089 [Pipeline] catchError 00:42:10.091 [Pipeline] { 00:42:10.108 [Pipeline] wrap 00:42:10.119 [Pipeline] { 00:42:10.129 [Pipeline] stage 00:42:10.131 [Pipeline] { (Prologue) 00:42:10.155 [Pipeline] echo 00:42:10.157 Node: VM-host-WFP1 00:42:10.164 [Pipeline] cleanWs 00:42:10.174 [WS-CLEANUP] Deleting project workspace... 00:42:10.174 [WS-CLEANUP] Deferred wipeout is used... 
00:42:10.182 [WS-CLEANUP] done 00:42:10.380 [Pipeline] setCustomBuildProperty 00:42:10.469 [Pipeline] httpRequest 00:42:10.489 [Pipeline] echo 00:42:10.491 Sorcerer 10.211.164.101 is alive 00:42:10.500 [Pipeline] httpRequest 00:42:10.504 HttpMethod: GET 00:42:10.505 URL: http://10.211.164.101/packages/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:42:10.506 Sending request to url: http://10.211.164.101/packages/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:42:10.508 Response Code: HTTP/1.1 200 OK 00:42:10.509 Success: Status code 200 is in the accepted range: 200,404 00:42:10.510 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:42:10.654 [Pipeline] sh 00:42:10.940 + tar --no-same-owner -xf jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:42:10.959 [Pipeline] httpRequest 00:42:10.980 [Pipeline] echo 00:42:10.983 Sorcerer 10.211.164.101 is alive 00:42:10.993 [Pipeline] httpRequest 00:42:10.997 HttpMethod: GET 00:42:10.998 URL: http://10.211.164.101/packages/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:42:10.998 Sending request to url: http://10.211.164.101/packages/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:42:10.999 Response Code: HTTP/1.1 200 OK 00:42:11.000 Success: Status code 200 is in the accepted range: 200,404 00:42:11.000 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:42:13.301 [Pipeline] sh 00:42:13.577 + tar --no-same-owner -xf spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:42:16.164 [Pipeline] sh 00:42:16.439 + git -C spdk log --oneline -n5 00:42:16.439 8fb860b73 test/dd: check spdk_dd direct link to liburing 00:42:16.439 89648519b bdev/compress: Output the pm_path entry for bdev_get_bdevs() 00:42:16.439 a1a2e2b48 nvme/pcie: add debug print for number of SGL/PRP entries 00:42:16.439 8b5c4be8b nvme/fio_plugin: add support for the disable_pcie_sgl_merge option 00:42:16.439 e431ba2e4 nvme/pcie: add disable_pcie_sgl_merge option 00:42:16.457 [Pipeline] withCredentials 00:42:16.466 > git --version # timeout=10 00:42:16.477 > git --version # 'git version 2.39.2' 00:42:16.489 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:42:16.491 [Pipeline] { 00:42:16.500 [Pipeline] retry 00:42:16.502 [Pipeline] { 00:42:16.519 [Pipeline] sh 00:42:16.794 + git ls-remote http://dpdk.org/git/dpdk main 00:42:17.385 [Pipeline] } 00:42:17.401 [Pipeline] // retry 00:42:17.406 [Pipeline] } 00:42:17.424 [Pipeline] // withCredentials 00:42:17.434 [Pipeline] httpRequest 00:42:17.445 [Pipeline] echo 00:42:17.446 Sorcerer 10.211.164.101 is alive 00:42:17.454 [Pipeline] httpRequest 00:42:17.458 HttpMethod: GET 00:42:17.458 URL: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:42:17.458 Sending request to url: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:42:17.459 Response Code: HTTP/1.1 200 OK 00:42:17.459 Success: Status code 200 is in the accepted range: 200,404 00:42:17.460 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:42:18.563 [Pipeline] sh 00:42:18.839 + tar --no-same-owner -xf dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:42:20.219 [Pipeline] sh 00:42:20.500 + git -C dpdk log --oneline -n5 00:42:20.500 fa8d2f7f28 version: 24.07-rc2 00:42:20.500 d4bc3c2e01 maintainers: update for cxgbe driver 00:42:20.500 2227c0ed9a 
maintainers: update for Microsoft drivers 00:42:20.500 8385370337 maintainers: update for Arm 00:42:20.500 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:42:20.521 [Pipeline] writeFile 00:42:20.538 [Pipeline] sh 00:42:20.827 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:42:20.848 [Pipeline] sh 00:42:21.121 + cat autorun-spdk.conf 00:42:21.121 SPDK_RUN_FUNCTIONAL_TEST=1 00:42:21.121 SPDK_TEST_NVMF=1 00:42:21.121 SPDK_TEST_NVMF_TRANSPORT=tcp 00:42:21.121 SPDK_TEST_USDT=1 00:42:21.121 SPDK_RUN_UBSAN=1 00:42:21.121 SPDK_TEST_NVMF_MDNS=1 00:42:21.121 NET_TYPE=virt 00:42:21.121 SPDK_JSONRPC_GO_CLIENT=1 00:42:21.121 SPDK_TEST_NATIVE_DPDK=main 00:42:21.121 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:42:21.121 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:42:21.127 RUN_NIGHTLY=1 00:42:21.146 [Pipeline] } 00:42:21.163 [Pipeline] // stage 00:42:21.177 [Pipeline] stage 00:42:21.180 [Pipeline] { (Run VM) 00:42:21.193 [Pipeline] sh 00:42:21.469 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:42:21.470 + echo 'Start stage prepare_nvme.sh' 00:42:21.470 Start stage prepare_nvme.sh 00:42:21.470 + [[ -n 1 ]] 00:42:21.470 + disk_prefix=ex1 00:42:21.470 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:42:21.470 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:42:21.470 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:42:21.470 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:42:21.470 ++ SPDK_TEST_NVMF=1 00:42:21.470 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:42:21.470 ++ SPDK_TEST_USDT=1 00:42:21.470 ++ SPDK_RUN_UBSAN=1 00:42:21.470 ++ SPDK_TEST_NVMF_MDNS=1 00:42:21.470 ++ NET_TYPE=virt 00:42:21.470 ++ SPDK_JSONRPC_GO_CLIENT=1 00:42:21.470 ++ SPDK_TEST_NATIVE_DPDK=main 00:42:21.470 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:42:21.470 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:42:21.470 ++ RUN_NIGHTLY=1 00:42:21.470 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:42:21.470 + nvme_files=() 00:42:21.470 + declare -A nvme_files 00:42:21.470 + backend_dir=/var/lib/libvirt/images/backends 00:42:21.470 + nvme_files['nvme.img']=5G 00:42:21.470 + nvme_files['nvme-cmb.img']=5G 00:42:21.470 + nvme_files['nvme-multi0.img']=4G 00:42:21.470 + nvme_files['nvme-multi1.img']=4G 00:42:21.470 + nvme_files['nvme-multi2.img']=4G 00:42:21.470 + nvme_files['nvme-openstack.img']=8G 00:42:21.470 + nvme_files['nvme-zns.img']=5G 00:42:21.470 + (( SPDK_TEST_NVME_PMR == 1 )) 00:42:21.470 + (( SPDK_TEST_FTL == 1 )) 00:42:21.470 + (( SPDK_TEST_NVME_FDP == 1 )) 00:42:21.470 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:42:21.470 + for nvme in "${!nvme_files[@]}" 00:42:21.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:42:21.470 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:42:21.470 + for nvme in "${!nvme_files[@]}" 00:42:21.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:42:21.470 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:42:21.470 + for nvme in "${!nvme_files[@]}" 00:42:21.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:42:21.470 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:42:21.470 + for nvme in "${!nvme_files[@]}" 00:42:21.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:42:21.470 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:42:21.470 + for nvme in "${!nvme_files[@]}" 00:42:21.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:42:21.470 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:42:21.470 + for nvme in "${!nvme_files[@]}" 00:42:21.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:42:21.470 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:42:21.470 + for nvme in "${!nvme_files[@]}" 00:42:21.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:42:21.727 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:42:21.727 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:42:21.727 + echo 'End stage prepare_nvme.sh' 00:42:21.727 End stage prepare_nvme.sh 00:42:21.738 [Pipeline] sh 00:42:22.017 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:42:22.017 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38 00:42:22.017 00:42:22.017 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:42:22.017 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:42:22.017 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:42:22.017 HELP=0 00:42:22.017 DRY_RUN=0 00:42:22.017 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:42:22.017 NVME_DISKS_TYPE=nvme,nvme, 00:42:22.017 NVME_AUTO_CREATE=0 00:42:22.017 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:42:22.017 NVME_CMB=,, 00:42:22.017 NVME_PMR=,, 00:42:22.017 NVME_ZNS=,, 00:42:22.017 NVME_MS=,, 00:42:22.017 NVME_FDP=,, 00:42:22.017 
SPDK_VAGRANT_DISTRO=fedora38 00:42:22.017 SPDK_VAGRANT_VMCPU=10 00:42:22.017 SPDK_VAGRANT_VMRAM=12288 00:42:22.017 SPDK_VAGRANT_PROVIDER=libvirt 00:42:22.017 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:42:22.017 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:42:22.017 SPDK_OPENSTACK_NETWORK=0 00:42:22.017 VAGRANT_PACKAGE_BOX=0 00:42:22.017 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:42:22.017 FORCE_DISTRO=true 00:42:22.017 VAGRANT_BOX_VERSION= 00:42:22.017 EXTRA_VAGRANTFILES= 00:42:22.017 NIC_MODEL=e1000 00:42:22.017 00:42:22.017 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:42:22.017 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:42:24.544 Bringing machine 'default' up with 'libvirt' provider... 00:42:25.914 ==> default: Creating image (snapshot of base box volume). 00:42:26.172 ==> default: Creating domain with the following settings... 00:42:26.172 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721644652_d34cc0c93cf34e4db8bb 00:42:26.172 ==> default: -- Domain type: kvm 00:42:26.172 ==> default: -- Cpus: 10 00:42:26.172 ==> default: -- Feature: acpi 00:42:26.172 ==> default: -- Feature: apic 00:42:26.172 ==> default: -- Feature: pae 00:42:26.172 ==> default: -- Memory: 12288M 00:42:26.172 ==> default: -- Memory Backing: hugepages: 00:42:26.172 ==> default: -- Management MAC: 00:42:26.172 ==> default: -- Loader: 00:42:26.172 ==> default: -- Nvram: 00:42:26.172 ==> default: -- Base box: spdk/fedora38 00:42:26.172 ==> default: -- Storage pool: default 00:42:26.172 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721644652_d34cc0c93cf34e4db8bb.img (20G) 00:42:26.172 ==> default: -- Volume Cache: default 00:42:26.172 ==> default: -- Kernel: 00:42:26.172 ==> default: -- Initrd: 00:42:26.172 ==> default: -- Graphics Type: vnc 00:42:26.172 ==> default: -- Graphics Port: -1 00:42:26.172 ==> default: -- Graphics IP: 127.0.0.1 00:42:26.172 ==> default: -- Graphics Password: Not defined 00:42:26.172 ==> default: -- Video Type: cirrus 00:42:26.172 ==> default: -- Video VRAM: 9216 00:42:26.172 ==> default: -- Sound Type: 00:42:26.172 ==> default: -- Keymap: en-us 00:42:26.172 ==> default: -- TPM Path: 00:42:26.172 ==> default: -- INPUT: type=mouse, bus=ps2 00:42:26.172 ==> default: -- Command line args: 00:42:26.172 ==> default: -> value=-device, 00:42:26.172 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:42:26.172 ==> default: -> value=-drive, 00:42:26.172 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:42:26.172 ==> default: -> value=-device, 00:42:26.172 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:42:26.172 ==> default: -> value=-device, 00:42:26.172 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:42:26.172 ==> default: -> value=-drive, 00:42:26.172 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:42:26.172 ==> default: -> value=-device, 00:42:26.172 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:42:26.172 ==> default: -> value=-drive, 00:42:26.172 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:42:26.172 ==> default: -> value=-device, 00:42:26.172 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:42:26.172 ==> default: -> value=-drive, 00:42:26.172 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:42:26.172 ==> default: -> value=-device, 00:42:26.172 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:42:26.431 ==> default: Creating shared folders metadata... 00:42:26.431 ==> default: Starting domain. 00:42:28.332 ==> default: Waiting for domain to get an IP address... 00:42:46.410 ==> default: Waiting for SSH to become available... 00:42:46.410 ==> default: Configuring and enabling network interfaces... 00:42:50.662 default: SSH address: 192.168.121.20:22 00:42:50.663 default: SSH username: vagrant 00:42:50.663 default: SSH auth method: private key 00:42:53.198 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:43:01.378 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:43:07.940 ==> default: Mounting SSHFS shared folder... 00:43:09.841 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:43:09.841 ==> default: Checking Mount.. 00:43:11.245 ==> default: Folder Successfully Mounted! 00:43:11.245 ==> default: Running provisioner: file... 00:43:12.618 default: ~/.gitconfig => .gitconfig 00:43:12.875 00:43:12.875 SUCCESS! 00:43:12.875 00:43:12.875 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:43:12.875 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:43:12.875 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:43:12.875 00:43:12.884 [Pipeline] } 00:43:12.905 [Pipeline] // stage 00:43:12.915 [Pipeline] dir 00:43:12.915 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:43:12.917 [Pipeline] { 00:43:12.932 [Pipeline] catchError 00:43:12.934 [Pipeline] { 00:43:12.950 [Pipeline] sh 00:43:13.231 + vagrant ssh-config --host vagrant 00:43:13.231 + sed -ne /^Host/,$p 00:43:13.231 + tee ssh_conf 00:43:16.516 Host vagrant 00:43:16.516 HostName 192.168.121.20 00:43:16.516 User vagrant 00:43:16.516 Port 22 00:43:16.516 UserKnownHostsFile /dev/null 00:43:16.516 StrictHostKeyChecking no 00:43:16.516 PasswordAuthentication no 00:43:16.516 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:43:16.516 IdentitiesOnly yes 00:43:16.516 LogLevel FATAL 00:43:16.516 ForwardAgent yes 00:43:16.516 ForwardX11 yes 00:43:16.516 00:43:16.529 [Pipeline] withEnv 00:43:16.532 [Pipeline] { 00:43:16.547 [Pipeline] sh 00:43:16.826 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:43:16.826 source /etc/os-release 00:43:16.826 [[ -e /image.version ]] && img=$(< /image.version) 00:43:16.826 # Minimal, systemd-like check. 
00:43:16.826 if [[ -e /.dockerenv ]]; then 00:43:16.826 # Clear garbage from the node's name: 00:43:16.826 # agt-er_autotest_547-896 -> autotest_547-896 00:43:16.826 # $HOSTNAME is the actual container id 00:43:16.826 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:43:16.826 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:43:16.826 # We can assume this is a mount from a host where container is running, 00:43:16.826 # so fetch its hostname to easily identify the target swarm worker. 00:43:16.826 container="$(< /etc/hostname) ($agent)" 00:43:16.826 else 00:43:16.826 # Fallback 00:43:16.826 container=$agent 00:43:16.826 fi 00:43:16.826 fi 00:43:16.826 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:43:16.826 00:43:17.095 [Pipeline] } 00:43:17.118 [Pipeline] // withEnv 00:43:17.126 [Pipeline] setCustomBuildProperty 00:43:17.141 [Pipeline] stage 00:43:17.143 [Pipeline] { (Tests) 00:43:17.163 [Pipeline] sh 00:43:17.443 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:43:17.714 [Pipeline] sh 00:43:17.994 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:43:18.265 [Pipeline] timeout 00:43:18.265 Timeout set to expire in 40 min 00:43:18.267 [Pipeline] { 00:43:18.284 [Pipeline] sh 00:43:18.562 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:43:19.125 HEAD is now at 8fb860b73 test/dd: check spdk_dd direct link to liburing 00:43:19.138 [Pipeline] sh 00:43:19.412 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:43:19.749 [Pipeline] sh 00:43:20.042 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:43:20.314 [Pipeline] sh 00:43:20.590 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:43:20.848 ++ readlink -f spdk_repo 00:43:20.848 + DIR_ROOT=/home/vagrant/spdk_repo 00:43:20.848 + [[ -n /home/vagrant/spdk_repo ]] 00:43:20.848 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:43:20.848 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:43:20.848 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:43:20.848 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:43:20.848 + [[ -d /home/vagrant/spdk_repo/output ]] 00:43:20.848 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:43:20.848 + cd /home/vagrant/spdk_repo 00:43:20.848 + source /etc/os-release 00:43:20.848 ++ NAME='Fedora Linux' 00:43:20.848 ++ VERSION='38 (Cloud Edition)' 00:43:20.848 ++ ID=fedora 00:43:20.848 ++ VERSION_ID=38 00:43:20.848 ++ VERSION_CODENAME= 00:43:20.848 ++ PLATFORM_ID=platform:f38 00:43:20.848 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:43:20.848 ++ ANSI_COLOR='0;38;2;60;110;180' 00:43:20.848 ++ LOGO=fedora-logo-icon 00:43:20.848 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:43:20.848 ++ HOME_URL=https://fedoraproject.org/ 00:43:20.848 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:43:20.848 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:43:20.848 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:43:20.848 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:43:20.848 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:43:20.848 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:43:20.848 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:43:20.848 ++ SUPPORT_END=2024-05-14 00:43:20.848 ++ VARIANT='Cloud Edition' 00:43:20.848 ++ VARIANT_ID=cloud 00:43:20.848 + uname -a 00:43:20.848 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:43:20.848 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:43:21.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:43:21.413 Hugepages 00:43:21.413 node hugesize free / total 00:43:21.413 node0 1048576kB 0 / 0 00:43:21.413 node0 2048kB 0 / 0 00:43:21.413 00:43:21.413 Type BDF Vendor Device NUMA Driver Device Block devices 00:43:21.413 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:43:21.413 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:43:21.413 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:43:21.413 + rm -f /tmp/spdk-ld-path 00:43:21.413 + source autorun-spdk.conf 00:43:21.413 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:43:21.413 ++ SPDK_TEST_NVMF=1 00:43:21.413 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:43:21.413 ++ SPDK_TEST_USDT=1 00:43:21.413 ++ SPDK_RUN_UBSAN=1 00:43:21.413 ++ SPDK_TEST_NVMF_MDNS=1 00:43:21.413 ++ NET_TYPE=virt 00:43:21.413 ++ SPDK_JSONRPC_GO_CLIENT=1 00:43:21.413 ++ SPDK_TEST_NATIVE_DPDK=main 00:43:21.413 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:43:21.413 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:43:21.414 ++ RUN_NIGHTLY=1 00:43:21.414 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:43:21.414 + [[ -n '' ]] 00:43:21.414 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:43:21.414 + for M in /var/spdk/build-*-manifest.txt 00:43:21.414 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:43:21.414 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:43:21.414 + for M in /var/spdk/build-*-manifest.txt 00:43:21.414 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:43:21.414 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:43:21.414 ++ uname 00:43:21.414 + [[ Linux == \L\i\n\u\x ]] 00:43:21.414 + sudo dmesg -T 00:43:21.674 + sudo dmesg --clear 00:43:21.674 + dmesg_pid=5837 00:43:21.674 + [[ Fedora Linux == FreeBSD ]] 00:43:21.674 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:43:21.674 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:43:21.674 + sudo dmesg -Tw 00:43:21.674 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:43:21.674 + [[ -x /usr/src/fio-static/fio ]] 00:43:21.674 + export FIO_BIN=/usr/src/fio-static/fio 00:43:21.674 + FIO_BIN=/usr/src/fio-static/fio 00:43:21.674 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:43:21.674 + [[ ! -v VFIO_QEMU_BIN ]] 00:43:21.674 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:43:21.674 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:43:21.674 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:43:21.674 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:43:21.674 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:43:21.674 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:43:21.674 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:43:21.674 Test configuration: 00:43:21.674 SPDK_RUN_FUNCTIONAL_TEST=1 00:43:21.674 SPDK_TEST_NVMF=1 00:43:21.674 SPDK_TEST_NVMF_TRANSPORT=tcp 00:43:21.674 SPDK_TEST_USDT=1 00:43:21.674 SPDK_RUN_UBSAN=1 00:43:21.674 SPDK_TEST_NVMF_MDNS=1 00:43:21.674 NET_TYPE=virt 00:43:21.674 SPDK_JSONRPC_GO_CLIENT=1 00:43:21.674 SPDK_TEST_NATIVE_DPDK=main 00:43:21.674 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:43:21.674 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:43:21.674 RUN_NIGHTLY=1 10:38:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:21.674 10:38:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:43:21.674 10:38:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:21.674 10:38:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:21.674 10:38:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.674 10:38:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.674 10:38:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.674 10:38:29 -- paths/export.sh@5 -- $ export PATH 00:43:21.674 10:38:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.674 10:38:29 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:43:21.674 
10:38:29 -- common/autobuild_common.sh@447 -- $ date +%s 00:43:21.674 10:38:29 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721644709.XXXXXX 00:43:21.674 10:38:29 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721644709.U1mefh 00:43:21.674 10:38:29 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:43:21.674 10:38:29 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:43:21.674 10:38:29 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:43:21.674 10:38:29 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:43:21.674 10:38:29 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:43:21.674 10:38:29 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:43:21.674 10:38:29 -- common/autobuild_common.sh@463 -- $ get_config_params 00:43:21.674 10:38:29 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:43:21.674 10:38:29 -- common/autotest_common.sh@10 -- $ set +x 00:43:21.674 10:38:29 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:43:21.674 10:38:29 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:43:21.674 10:38:29 -- pm/common@17 -- $ local monitor 00:43:21.674 10:38:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:21.674 10:38:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:21.674 10:38:29 -- pm/common@25 -- $ sleep 1 00:43:21.674 10:38:29 -- pm/common@21 -- $ date +%s 00:43:21.933 10:38:29 -- pm/common@21 -- $ date +%s 00:43:21.933 10:38:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721644709 00:43:21.933 10:38:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721644709 00:43:21.933 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721644709_collect-vmstat.pm.log 00:43:21.933 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721644709_collect-cpu-load.pm.log 00:43:22.867 10:38:30 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:43:22.867 10:38:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:43:22.867 10:38:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:43:22.867 10:38:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:43:22.867 10:38:30 -- spdk/autobuild.sh@16 -- $ date -u 00:43:22.867 Mon Jul 22 10:38:30 AM UTC 2024 00:43:22.867 10:38:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:43:22.867 v24.09-pre-259-g8fb860b73 00:43:22.867 10:38:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:43:22.867 10:38:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:43:22.867 10:38:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:43:22.867 10:38:30 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:43:22.867 10:38:30 -- 
common/autotest_common.sh@1105 -- $ xtrace_disable 00:43:22.867 10:38:30 -- common/autotest_common.sh@10 -- $ set +x 00:43:22.867 ************************************ 00:43:22.867 START TEST ubsan 00:43:22.867 ************************************ 00:43:22.867 using ubsan 00:43:22.867 10:38:30 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:43:22.867 00:43:22.867 real 0m0.001s 00:43:22.867 user 0m0.001s 00:43:22.867 sys 0m0.000s 00:43:22.867 10:38:30 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:43:22.867 10:38:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:43:22.867 ************************************ 00:43:22.867 END TEST ubsan 00:43:22.867 ************************************ 00:43:22.867 10:38:30 -- common/autotest_common.sh@1142 -- $ return 0 00:43:22.867 10:38:30 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:43:22.867 10:38:30 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:43:22.867 10:38:30 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:43:22.867 10:38:30 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:43:22.867 10:38:30 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:43:22.867 10:38:30 -- common/autotest_common.sh@10 -- $ set +x 00:43:22.867 ************************************ 00:43:22.867 START TEST build_native_dpdk 00:43:22.867 ************************************ 00:43:22.867 10:38:30 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:43:22.867 fa8d2f7f28 version: 24.07-rc2 00:43:22.867 d4bc3c2e01 maintainers: update for cxgbe driver 00:43:22.867 2227c0ed9a maintainers: update for Microsoft drivers 00:43:22.867 8385370337 maintainers: update for Arm 00:43:22.867 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 21.11.0 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:43:22.867 10:38:30 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:43:22.867 10:38:30 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:43:22.867 10:38:30 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:43:23.126 patching file config/rte_config.h 00:43:23.126 Hunk #1 succeeded at 70 (offset 11 lines). 00:43:23.126 10:38:30 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc2 24.07.0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 24.07.0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc2 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc2 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc2 =~ ^[0-9]+$ ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc2 =~ ^0x ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc2 =~ ^[a-f0-9]+$ ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:43:23.126 10:38:30 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:43:23.126 10:38:30 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:43:23.126 10:38:30 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:43:23.126 10:38:30 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:43:23.126 10:38:30 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:43:23.127 10:38:30 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:43:28.394 The Meson build system 00:43:28.394 Version: 1.3.1 00:43:28.394 Source dir: /home/vagrant/spdk_repo/dpdk 00:43:28.394 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:43:28.394 Build type: native build 00:43:28.394 Program cat found: YES (/usr/bin/cat) 00:43:28.394 Project name: DPDK 00:43:28.394 Project version: 24.07.0-rc2 00:43:28.394 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:43:28.394 C linker for the host machine: gcc ld.bfd 2.39-16 00:43:28.394 Host machine cpu family: x86_64 00:43:28.394 Host machine cpu: x86_64 00:43:28.394 Message: ## Building in Developer Mode ## 00:43:28.394 Program pkg-config found: YES (/usr/bin/pkg-config) 00:43:28.394 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:43:28.394 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:43:28.394 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:43:28.394 Program cat found: YES (/usr/bin/cat) 00:43:28.394 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
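Editor's note: the xtrace above (scripts/common.sh, the lt/cmp_versions path) is deciding whether the checked-out DPDK (24.07.0-rc2) is older than 24.07.0 before applying compatibility patches. Stripped of tracing, the behaviour is roughly the simplified sketch below, not the exact upstream function: version strings are split on '.', '-' and ':', non-numeric components such as "rc2" count as 0, so 24.07.0-rc2 compares equal to 24.07.0 and the "less than" test fails.

# Simplified sketch of the component-wise compare traced above; the real
# logic lives in spdk/scripts/common.sh (cmp_versions), this only
# illustrates its observable behaviour.
lt() {                                  # returns 0 if $1 < $2
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        # non-numeric parts (rc2, empty) are treated as 0, numbers as base 10
        [[ $a =~ ^[0-9]+$ ]] && a=$((10#$a)) || a=0
        [[ $b =~ ^[0-9]+$ ]] && b=$((10#$b)) || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                            # equal, so not "less than"
}
lt 24.07.0-rc2 24.07.0 || echo ">= 24.07.0, skip the pre-24.07 patches"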
00:43:28.394 Compiler for C supports arguments -march=native: YES 00:43:28.394 Checking for size of "void *" : 8 00:43:28.394 Checking for size of "void *" : 8 (cached) 00:43:28.394 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:43:28.394 Library m found: YES 00:43:28.394 Library numa found: YES 00:43:28.394 Has header "numaif.h" : YES 00:43:28.394 Library fdt found: NO 00:43:28.394 Library execinfo found: NO 00:43:28.394 Has header "execinfo.h" : YES 00:43:28.394 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:43:28.394 Run-time dependency libarchive found: NO (tried pkgconfig) 00:43:28.394 Run-time dependency libbsd found: NO (tried pkgconfig) 00:43:28.394 Run-time dependency jansson found: NO (tried pkgconfig) 00:43:28.394 Run-time dependency openssl found: YES 3.0.9 00:43:28.394 Run-time dependency libpcap found: YES 1.10.4 00:43:28.394 Has header "pcap.h" with dependency libpcap: YES 00:43:28.394 Compiler for C supports arguments -Wcast-qual: YES 00:43:28.394 Compiler for C supports arguments -Wdeprecated: YES 00:43:28.394 Compiler for C supports arguments -Wformat: YES 00:43:28.394 Compiler for C supports arguments -Wformat-nonliteral: NO 00:43:28.394 Compiler for C supports arguments -Wformat-security: NO 00:43:28.394 Compiler for C supports arguments -Wmissing-declarations: YES 00:43:28.394 Compiler for C supports arguments -Wmissing-prototypes: YES 00:43:28.394 Compiler for C supports arguments -Wnested-externs: YES 00:43:28.394 Compiler for C supports arguments -Wold-style-definition: YES 00:43:28.394 Compiler for C supports arguments -Wpointer-arith: YES 00:43:28.394 Compiler for C supports arguments -Wsign-compare: YES 00:43:28.394 Compiler for C supports arguments -Wstrict-prototypes: YES 00:43:28.394 Compiler for C supports arguments -Wundef: YES 00:43:28.394 Compiler for C supports arguments -Wwrite-strings: YES 00:43:28.394 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:43:28.394 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:43:28.394 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:43:28.394 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:43:28.394 Program objdump found: YES (/usr/bin/objdump) 00:43:28.394 Compiler for C supports arguments -mavx512f: YES 00:43:28.394 Checking if "AVX512 checking" compiles: YES 00:43:28.394 Fetching value of define "__SSE4_2__" : 1 00:43:28.394 Fetching value of define "__AES__" : 1 00:43:28.394 Fetching value of define "__AVX__" : 1 00:43:28.394 Fetching value of define "__AVX2__" : 1 00:43:28.394 Fetching value of define "__AVX512BW__" : 1 00:43:28.394 Fetching value of define "__AVX512CD__" : 1 00:43:28.394 Fetching value of define "__AVX512DQ__" : 1 00:43:28.394 Fetching value of define "__AVX512F__" : 1 00:43:28.395 Fetching value of define "__AVX512VL__" : 1 00:43:28.395 Fetching value of define "__PCLMUL__" : 1 00:43:28.395 Fetching value of define "__RDRND__" : 1 00:43:28.395 Fetching value of define "__RDSEED__" : 1 00:43:28.395 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:43:28.395 Compiler for C supports arguments -Wno-format-truncation: YES 00:43:28.395 Message: lib/log: Defining dependency "log" 00:43:28.395 Message: lib/kvargs: Defining dependency "kvargs" 00:43:28.395 Message: lib/argparse: Defining dependency "argparse" 00:43:28.395 Message: lib/telemetry: Defining dependency "telemetry" 00:43:28.395 Checking for function "getentropy" : NO 00:43:28.395 Message: lib/eal: Defining dependency "eal" 
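Editor's note: the DPDK configure step whose Meson output surrounds this point can be reproduced outside the CI VM roughly as below. Only the meson arguments are copied verbatim from the invocation above; the ninja install step and the SPDK ./configure call are assumptions inferred from the config_params and --with-dpdk path printed earlier in this log.

# Rough reproduction sketch, assuming the /home/vagrant/spdk_repo layout
# used by this job; only the meson line is taken verbatim from the log.
cd /home/vagrant/spdk_repo/dpdk
meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
ninja -C build-tmp install              # assumed follow-up step; not shown in this log
cd /home/vagrant/spdk_repo/spdk
# point SPDK at the external DPDK build, per the config_params line earlier
./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build   # plus the remaining config_params flags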
00:43:28.395 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:43:28.395 Message: lib/ring: Defining dependency "ring" 00:43:28.395 Message: lib/rcu: Defining dependency "rcu" 00:43:28.395 Message: lib/mempool: Defining dependency "mempool" 00:43:28.395 Message: lib/mbuf: Defining dependency "mbuf" 00:43:28.395 Fetching value of define "__PCLMUL__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512F__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512BW__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512VL__" : 1 (cached) 00:43:28.395 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:43:28.395 Compiler for C supports arguments -mpclmul: YES 00:43:28.395 Compiler for C supports arguments -maes: YES 00:43:28.395 Compiler for C supports arguments -mavx512f: YES (cached) 00:43:28.395 Compiler for C supports arguments -mavx512bw: YES 00:43:28.395 Compiler for C supports arguments -mavx512dq: YES 00:43:28.395 Compiler for C supports arguments -mavx512vl: YES 00:43:28.395 Compiler for C supports arguments -mvpclmulqdq: YES 00:43:28.395 Compiler for C supports arguments -mavx2: YES 00:43:28.395 Compiler for C supports arguments -mavx: YES 00:43:28.395 Message: lib/net: Defining dependency "net" 00:43:28.395 Message: lib/meter: Defining dependency "meter" 00:43:28.395 Message: lib/ethdev: Defining dependency "ethdev" 00:43:28.395 Message: lib/pci: Defining dependency "pci" 00:43:28.395 Message: lib/cmdline: Defining dependency "cmdline" 00:43:28.395 Message: lib/metrics: Defining dependency "metrics" 00:43:28.395 Message: lib/hash: Defining dependency "hash" 00:43:28.395 Message: lib/timer: Defining dependency "timer" 00:43:28.395 Fetching value of define "__AVX512F__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512VL__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512CD__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512BW__" : 1 (cached) 00:43:28.395 Message: lib/acl: Defining dependency "acl" 00:43:28.395 Message: lib/bbdev: Defining dependency "bbdev" 00:43:28.395 Message: lib/bitratestats: Defining dependency "bitratestats" 00:43:28.395 Run-time dependency libelf found: YES 0.190 00:43:28.395 Message: lib/bpf: Defining dependency "bpf" 00:43:28.395 Message: lib/cfgfile: Defining dependency "cfgfile" 00:43:28.395 Message: lib/compressdev: Defining dependency "compressdev" 00:43:28.395 Message: lib/cryptodev: Defining dependency "cryptodev" 00:43:28.395 Message: lib/distributor: Defining dependency "distributor" 00:43:28.395 Message: lib/dmadev: Defining dependency "dmadev" 00:43:28.395 Message: lib/efd: Defining dependency "efd" 00:43:28.395 Message: lib/eventdev: Defining dependency "eventdev" 00:43:28.395 Message: lib/dispatcher: Defining dependency "dispatcher" 00:43:28.395 Message: lib/gpudev: Defining dependency "gpudev" 00:43:28.395 Message: lib/gro: Defining dependency "gro" 00:43:28.395 Message: lib/gso: Defining dependency "gso" 00:43:28.395 Message: lib/ip_frag: Defining dependency "ip_frag" 00:43:28.395 Message: lib/jobstats: Defining dependency "jobstats" 00:43:28.395 Message: lib/latencystats: Defining dependency "latencystats" 00:43:28.395 Message: lib/lpm: Defining dependency "lpm" 00:43:28.395 Fetching value of define "__AVX512F__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512IFMA__" : (undefined) 00:43:28.395 Compiler for C supports 
arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:43:28.395 Message: lib/member: Defining dependency "member" 00:43:28.395 Message: lib/pcapng: Defining dependency "pcapng" 00:43:28.395 Compiler for C supports arguments -Wno-cast-qual: YES 00:43:28.395 Message: lib/power: Defining dependency "power" 00:43:28.395 Message: lib/rawdev: Defining dependency "rawdev" 00:43:28.395 Message: lib/regexdev: Defining dependency "regexdev" 00:43:28.395 Message: lib/mldev: Defining dependency "mldev" 00:43:28.395 Message: lib/rib: Defining dependency "rib" 00:43:28.395 Message: lib/reorder: Defining dependency "reorder" 00:43:28.395 Message: lib/sched: Defining dependency "sched" 00:43:28.395 Message: lib/security: Defining dependency "security" 00:43:28.395 Message: lib/stack: Defining dependency "stack" 00:43:28.395 Has header "linux/userfaultfd.h" : YES 00:43:28.395 Has header "linux/vduse.h" : YES 00:43:28.395 Message: lib/vhost: Defining dependency "vhost" 00:43:28.395 Message: lib/ipsec: Defining dependency "ipsec" 00:43:28.395 Message: lib/pdcp: Defining dependency "pdcp" 00:43:28.395 Fetching value of define "__AVX512F__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:43:28.395 Fetching value of define "__AVX512BW__" : 1 (cached) 00:43:28.395 Message: lib/fib: Defining dependency "fib" 00:43:28.395 Message: lib/port: Defining dependency "port" 00:43:28.395 Message: lib/pdump: Defining dependency "pdump" 00:43:28.395 Message: lib/table: Defining dependency "table" 00:43:28.395 Message: lib/pipeline: Defining dependency "pipeline" 00:43:28.395 Message: lib/graph: Defining dependency "graph" 00:43:28.395 Message: lib/node: Defining dependency "node" 00:43:28.395 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:43:28.395 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:43:28.395 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:43:29.770 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:43:29.770 Compiler for C supports arguments -Wno-sign-compare: YES 00:43:29.770 Compiler for C supports arguments -Wno-unused-value: YES 00:43:29.770 Compiler for C supports arguments -Wno-format: YES 00:43:29.770 Compiler for C supports arguments -Wno-format-security: YES 00:43:29.770 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:43:29.770 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:43:29.770 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:43:29.770 Compiler for C supports arguments -Wno-unused-parameter: YES 00:43:29.770 Fetching value of define "__AVX512F__" : 1 (cached) 00:43:29.770 Fetching value of define "__AVX512BW__" : 1 (cached) 00:43:29.770 Compiler for C supports arguments -mavx512f: YES (cached) 00:43:29.770 Compiler for C supports arguments -mavx512bw: YES (cached) 00:43:29.770 Compiler for C supports arguments -march=skylake-avx512: YES 00:43:29.770 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:43:29.770 Has header "sys/epoll.h" : YES 00:43:29.770 Program doxygen found: YES (/usr/bin/doxygen) 00:43:29.770 Configuring doxy-api-html.conf using configuration 00:43:29.770 Configuring doxy-api-man.conf using configuration 00:43:29.770 Program mandb found: YES (/usr/bin/mandb) 00:43:29.770 Program sphinx-build found: NO 00:43:29.770 Configuring rte_build_config.h using configuration 00:43:29.770 Message: 00:43:29.770 ================= 00:43:29.770 Applications Enabled 00:43:29.770 ================= 00:43:29.770 00:43:29.770 apps: 
00:43:29.770 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:43:29.770 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:43:29.770 test-pmd, test-regex, test-sad, test-security-perf, 00:43:29.770 00:43:29.770 Message: 00:43:29.770 ================= 00:43:29.770 Libraries Enabled 00:43:29.770 ================= 00:43:29.770 00:43:29.770 libs: 00:43:29.770 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:43:29.770 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:43:29.770 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:43:29.770 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:43:29.770 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:43:29.770 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:43:29.770 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:43:29.770 graph, node, 00:43:29.770 00:43:29.770 Message: 00:43:29.770 =============== 00:43:29.770 Drivers Enabled 00:43:29.770 =============== 00:43:29.770 00:43:29.770 common: 00:43:29.770 00:43:29.770 bus: 00:43:29.770 pci, vdev, 00:43:29.770 mempool: 00:43:29.770 ring, 00:43:29.770 dma: 00:43:29.770 00:43:29.770 net: 00:43:29.770 i40e, 00:43:29.770 raw: 00:43:29.770 00:43:29.770 crypto: 00:43:29.770 00:43:29.770 compress: 00:43:29.770 00:43:29.770 regex: 00:43:29.770 00:43:29.770 ml: 00:43:29.770 00:43:29.770 vdpa: 00:43:29.770 00:43:29.770 event: 00:43:29.770 00:43:29.770 baseband: 00:43:29.770 00:43:29.770 gpu: 00:43:29.770 00:43:29.770 00:43:29.770 Message: 00:43:29.770 ================= 00:43:29.770 Content Skipped 00:43:29.770 ================= 00:43:29.770 00:43:29.770 apps: 00:43:29.770 00:43:29.770 libs: 00:43:29.770 00:43:29.770 drivers: 00:43:29.770 common/cpt: not in enabled drivers build config 00:43:29.770 common/dpaax: not in enabled drivers build config 00:43:29.770 common/iavf: not in enabled drivers build config 00:43:29.770 common/idpf: not in enabled drivers build config 00:43:29.770 common/ionic: not in enabled drivers build config 00:43:29.770 common/mvep: not in enabled drivers build config 00:43:29.770 common/octeontx: not in enabled drivers build config 00:43:29.770 bus/auxiliary: not in enabled drivers build config 00:43:29.770 bus/cdx: not in enabled drivers build config 00:43:29.770 bus/dpaa: not in enabled drivers build config 00:43:29.770 bus/fslmc: not in enabled drivers build config 00:43:29.770 bus/ifpga: not in enabled drivers build config 00:43:29.770 bus/platform: not in enabled drivers build config 00:43:29.770 bus/uacce: not in enabled drivers build config 00:43:29.770 bus/vmbus: not in enabled drivers build config 00:43:29.770 common/cnxk: not in enabled drivers build config 00:43:29.770 common/mlx5: not in enabled drivers build config 00:43:29.770 common/nfp: not in enabled drivers build config 00:43:29.770 common/nitrox: not in enabled drivers build config 00:43:29.770 common/qat: not in enabled drivers build config 00:43:29.770 common/sfc_efx: not in enabled drivers build config 00:43:29.770 mempool/bucket: not in enabled drivers build config 00:43:29.770 mempool/cnxk: not in enabled drivers build config 00:43:29.770 mempool/dpaa: not in enabled drivers build config 00:43:29.770 mempool/dpaa2: not in enabled drivers build config 00:43:29.770 mempool/octeontx: not in enabled drivers build config 00:43:29.770 mempool/stack: not in enabled drivers build config 00:43:29.770 
dma/cnxk: not in enabled drivers build config 00:43:29.770 dma/dpaa: not in enabled drivers build config 00:43:29.770 dma/dpaa2: not in enabled drivers build config 00:43:29.770 dma/hisilicon: not in enabled drivers build config 00:43:29.770 dma/idxd: not in enabled drivers build config 00:43:29.770 dma/ioat: not in enabled drivers build config 00:43:29.770 dma/odm: not in enabled drivers build config 00:43:29.770 dma/skeleton: not in enabled drivers build config 00:43:29.770 net/af_packet: not in enabled drivers build config 00:43:29.770 net/af_xdp: not in enabled drivers build config 00:43:29.770 net/ark: not in enabled drivers build config 00:43:29.770 net/atlantic: not in enabled drivers build config 00:43:29.771 net/avp: not in enabled drivers build config 00:43:29.771 net/axgbe: not in enabled drivers build config 00:43:29.771 net/bnx2x: not in enabled drivers build config 00:43:29.771 net/bnxt: not in enabled drivers build config 00:43:29.771 net/bonding: not in enabled drivers build config 00:43:29.771 net/cnxk: not in enabled drivers build config 00:43:29.771 net/cpfl: not in enabled drivers build config 00:43:29.771 net/cxgbe: not in enabled drivers build config 00:43:29.771 net/dpaa: not in enabled drivers build config 00:43:29.771 net/dpaa2: not in enabled drivers build config 00:43:29.771 net/e1000: not in enabled drivers build config 00:43:29.771 net/ena: not in enabled drivers build config 00:43:29.771 net/enetc: not in enabled drivers build config 00:43:29.771 net/enetfec: not in enabled drivers build config 00:43:29.771 net/enic: not in enabled drivers build config 00:43:29.771 net/failsafe: not in enabled drivers build config 00:43:29.771 net/fm10k: not in enabled drivers build config 00:43:29.771 net/gve: not in enabled drivers build config 00:43:29.771 net/hinic: not in enabled drivers build config 00:43:29.771 net/hns3: not in enabled drivers build config 00:43:29.771 net/iavf: not in enabled drivers build config 00:43:29.771 net/ice: not in enabled drivers build config 00:43:29.771 net/idpf: not in enabled drivers build config 00:43:29.771 net/igc: not in enabled drivers build config 00:43:29.771 net/ionic: not in enabled drivers build config 00:43:29.771 net/ipn3ke: not in enabled drivers build config 00:43:29.771 net/ixgbe: not in enabled drivers build config 00:43:29.771 net/mana: not in enabled drivers build config 00:43:29.771 net/memif: not in enabled drivers build config 00:43:29.771 net/mlx4: not in enabled drivers build config 00:43:29.771 net/mlx5: not in enabled drivers build config 00:43:29.771 net/mvneta: not in enabled drivers build config 00:43:29.771 net/mvpp2: not in enabled drivers build config 00:43:29.771 net/netvsc: not in enabled drivers build config 00:43:29.771 net/nfb: not in enabled drivers build config 00:43:29.771 net/nfp: not in enabled drivers build config 00:43:29.771 net/ngbe: not in enabled drivers build config 00:43:29.771 net/null: not in enabled drivers build config 00:43:29.771 net/octeontx: not in enabled drivers build config 00:43:29.771 net/octeon_ep: not in enabled drivers build config 00:43:29.771 net/pcap: not in enabled drivers build config 00:43:29.771 net/pfe: not in enabled drivers build config 00:43:29.771 net/qede: not in enabled drivers build config 00:43:29.771 net/ring: not in enabled drivers build config 00:43:29.771 net/sfc: not in enabled drivers build config 00:43:29.771 net/softnic: not in enabled drivers build config 00:43:29.771 net/tap: not in enabled drivers build config 00:43:29.771 net/thunderx: not in 
enabled drivers build config 00:43:29.771 net/txgbe: not in enabled drivers build config 00:43:29.771 net/vdev_netvsc: not in enabled drivers build config 00:43:29.771 net/vhost: not in enabled drivers build config 00:43:29.771 net/virtio: not in enabled drivers build config 00:43:29.771 net/vmxnet3: not in enabled drivers build config 00:43:29.771 raw/cnxk_bphy: not in enabled drivers build config 00:43:29.771 raw/cnxk_gpio: not in enabled drivers build config 00:43:29.771 raw/dpaa2_cmdif: not in enabled drivers build config 00:43:29.771 raw/ifpga: not in enabled drivers build config 00:43:29.771 raw/ntb: not in enabled drivers build config 00:43:29.771 raw/skeleton: not in enabled drivers build config 00:43:29.771 crypto/armv8: not in enabled drivers build config 00:43:29.771 crypto/bcmfs: not in enabled drivers build config 00:43:29.771 crypto/caam_jr: not in enabled drivers build config 00:43:29.771 crypto/ccp: not in enabled drivers build config 00:43:29.771 crypto/cnxk: not in enabled drivers build config 00:43:29.771 crypto/dpaa_sec: not in enabled drivers build config 00:43:29.771 crypto/dpaa2_sec: not in enabled drivers build config 00:43:29.771 crypto/ionic: not in enabled drivers build config 00:43:29.771 crypto/ipsec_mb: not in enabled drivers build config 00:43:29.771 crypto/mlx5: not in enabled drivers build config 00:43:29.771 crypto/mvsam: not in enabled drivers build config 00:43:29.771 crypto/nitrox: not in enabled drivers build config 00:43:29.771 crypto/null: not in enabled drivers build config 00:43:29.771 crypto/octeontx: not in enabled drivers build config 00:43:29.771 crypto/openssl: not in enabled drivers build config 00:43:29.771 crypto/scheduler: not in enabled drivers build config 00:43:29.771 crypto/uadk: not in enabled drivers build config 00:43:29.771 crypto/virtio: not in enabled drivers build config 00:43:29.771 compress/isal: not in enabled drivers build config 00:43:29.771 compress/mlx5: not in enabled drivers build config 00:43:29.771 compress/nitrox: not in enabled drivers build config 00:43:29.771 compress/octeontx: not in enabled drivers build config 00:43:29.771 compress/uadk: not in enabled drivers build config 00:43:29.771 compress/zlib: not in enabled drivers build config 00:43:29.771 regex/mlx5: not in enabled drivers build config 00:43:29.771 regex/cn9k: not in enabled drivers build config 00:43:29.771 ml/cnxk: not in enabled drivers build config 00:43:29.771 vdpa/ifc: not in enabled drivers build config 00:43:29.771 vdpa/mlx5: not in enabled drivers build config 00:43:29.771 vdpa/nfp: not in enabled drivers build config 00:43:29.771 vdpa/sfc: not in enabled drivers build config 00:43:29.771 event/cnxk: not in enabled drivers build config 00:43:29.771 event/dlb2: not in enabled drivers build config 00:43:29.771 event/dpaa: not in enabled drivers build config 00:43:29.771 event/dpaa2: not in enabled drivers build config 00:43:29.771 event/dsw: not in enabled drivers build config 00:43:29.771 event/opdl: not in enabled drivers build config 00:43:29.771 event/skeleton: not in enabled drivers build config 00:43:29.771 event/sw: not in enabled drivers build config 00:43:29.771 event/octeontx: not in enabled drivers build config 00:43:29.771 baseband/acc: not in enabled drivers build config 00:43:29.771 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:43:29.771 baseband/fpga_lte_fec: not in enabled drivers build config 00:43:29.771 baseband/la12xx: not in enabled drivers build config 00:43:29.771 baseband/null: not in enabled drivers 
build config 00:43:29.771 baseband/turbo_sw: not in enabled drivers build config 00:43:29.771 gpu/cuda: not in enabled drivers build config 00:43:29.771 00:43:29.771 00:43:29.771 Build targets in project: 221 00:43:29.771 00:43:29.771 DPDK 24.07.0-rc2 00:43:29.771 00:43:29.771 User defined options 00:43:29.771 libdir : lib 00:43:29.771 prefix : /home/vagrant/spdk_repo/dpdk/build 00:43:29.771 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:43:29.771 c_link_args : 00:43:29.771 enable_docs : false 00:43:29.771 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:43:29.771 enable_kmods : false 00:43:29.771 machine : native 00:43:29.771 tests : false 00:43:29.771 00:43:29.771 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:43:29.771 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:43:29.771 10:38:37 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:43:29.771 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:43:30.031 [1/720] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:43:30.031 [2/720] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:43:30.031 [3/720] Linking static target lib/librte_kvargs.a 00:43:30.031 [4/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:43:30.031 [5/720] Compiling C object lib/librte_log.a.p/log_log.c.o 00:43:30.031 [6/720] Linking static target lib/librte_log.a 00:43:30.031 [7/720] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:43:30.031 [8/720] Linking static target lib/librte_argparse.a 00:43:30.291 [9/720] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:43:30.291 [10/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:43:30.291 [11/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:43:30.291 [12/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:43:30.291 [13/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:43:30.291 [14/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:43:30.291 [15/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:43:30.291 [16/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:43:30.291 [17/720] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:43:30.552 [18/720] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:43:30.552 [19/720] Linking target lib/librte_log.so.24.2 00:43:30.552 [20/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:43:30.552 [21/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:43:30.552 [22/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:43:30.552 [23/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:43:30.552 [24/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:43:30.812 [25/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:43:30.812 [26/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:43:30.812 [27/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:43:30.812 [28/720] 
Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:43:30.812 [29/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:43:30.812 [30/720] Linking target lib/librte_kvargs.so.24.2 00:43:30.812 [31/720] Linking target lib/librte_argparse.so.24.2 00:43:30.812 [32/720] Linking static target lib/librte_telemetry.a 00:43:30.812 [33/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:43:30.812 [34/720] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:43:30.812 [35/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:43:31.072 [36/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:43:31.072 [37/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:43:31.072 [38/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:43:31.072 [39/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:43:31.072 [40/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:43:31.072 [41/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:43:31.072 [42/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:43:31.072 [43/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:43:31.072 [44/720] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:43:31.072 [45/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:43:31.072 [46/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:43:31.072 [47/720] Linking target lib/librte_telemetry.so.24.2 00:43:31.401 [48/720] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:43:31.401 [49/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:43:31.401 [50/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:43:31.401 [51/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:43:31.690 [52/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:43:31.690 [53/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:43:31.690 [54/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:43:31.690 [55/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:43:31.690 [56/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:43:31.690 [57/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:43:31.690 [58/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:43:31.690 [59/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:43:31.690 [60/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:43:31.951 [61/720] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:43:31.951 [62/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:43:31.951 [63/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:43:31.951 [64/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:43:31.951 [65/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:43:31.951 [66/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:43:31.951 [67/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 
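(Annotation on the "User defined options" summary and the ninja step logged above: the same DPDK configuration can be reproduced outside the CI harness with an explicit `meson setup` call, which also avoids the deprecation WARNING about the bare `meson [options]` form. This is a minimal sketch, assuming the printed options map one-to-one onto `-D` flags and that `build-tmp` is the build directory used by the logged ninja command; it is not the exact command the autotest script runs.)

    # configure DPDK with the options shown in the summary above (sketch)
    meson setup build-tmp \
        -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/dpdk/build \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # then build, mirroring the ninja invocation in the log
    ninja -C build-tmp -j10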
00:43:31.951 [68/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:43:31.951 [69/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:43:31.951 [70/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:43:32.210 [71/720] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:43:32.210 [72/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:43:32.210 [73/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:43:32.210 [74/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:43:32.210 [75/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:43:32.470 [76/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:43:32.470 [77/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:43:32.470 [78/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:43:32.470 [79/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:43:32.470 [80/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:43:32.470 [81/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:43:32.470 [82/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:43:32.730 [83/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:43:32.730 [84/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:43:32.730 [85/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:43:32.730 [86/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:43:32.730 [87/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:43:32.730 [88/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:43:32.730 [89/720] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:43:32.730 [90/720] Linking static target lib/librte_ring.a 00:43:32.989 [91/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:43:32.989 [92/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:43:32.989 [93/720] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:43:32.989 [94/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:43:32.989 [95/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:43:32.989 [96/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:43:32.989 [97/720] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:43:32.989 [98/720] Linking static target lib/librte_eal.a 00:43:33.248 [99/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:43:33.248 [100/720] Linking static target lib/librte_mempool.a 00:43:33.507 [101/720] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:43:33.507 [102/720] Linking static target lib/librte_rcu.a 00:43:33.507 [103/720] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:43:33.507 [104/720] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:43:33.507 [105/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:43:33.507 [106/720] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:43:33.507 [107/720] Linking static target lib/net/libnet_crc_avx512_lib.a 00:43:33.507 [108/720] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:43:33.507 [109/720] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:43:33.507 [110/720] 
Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:43:33.766 [111/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:43:33.766 [112/720] Linking static target lib/librte_mbuf.a 00:43:33.766 [113/720] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:43:33.766 [114/720] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:43:33.766 [115/720] Linking static target lib/librte_net.a 00:43:33.766 [116/720] Linking static target lib/librte_meter.a 00:43:33.766 [117/720] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:43:33.766 [118/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:43:34.025 [119/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:43:34.025 [120/720] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:43:34.025 [121/720] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:43:34.025 [122/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:43:34.025 [123/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:43:34.284 [124/720] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:43:34.284 [125/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:43:34.543 [126/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:43:34.802 [127/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:43:34.802 [128/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:43:34.802 [129/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:43:34.802 [130/720] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:43:34.802 [131/720] Linking static target lib/librte_pci.a 00:43:34.802 [132/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:43:34.802 [133/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:43:34.802 [134/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:43:34.802 [135/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:43:34.802 [136/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:43:35.060 [137/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:43:35.060 [138/720] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:43:35.060 [139/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:43:35.060 [140/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:43:35.060 [141/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:43:35.060 [142/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:43:35.060 [143/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:43:35.060 [144/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:43:35.060 [145/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:43:35.060 [146/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:43:35.319 [147/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:43:35.319 [148/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:43:35.319 [149/720] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:43:35.319 [150/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:43:35.319 [151/720] Linking static target lib/librte_cmdline.a 00:43:35.576 [152/720] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:43:35.576 [153/720] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:43:35.576 [154/720] Linking static target lib/librte_metrics.a 00:43:35.576 [155/720] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:43:35.576 [156/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:43:35.576 [157/720] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:43:35.834 [158/720] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:43:35.834 [159/720] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:43:36.091 [160/720] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:43:36.091 [161/720] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:43:36.091 [162/720] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:43:36.091 [163/720] Linking static target lib/librte_timer.a 00:43:36.349 [164/720] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:43:36.349 [165/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:43:36.349 [166/720] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:43:36.349 [167/720] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:43:36.606 [168/720] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:43:36.864 [169/720] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:43:36.864 [170/720] Linking static target lib/librte_bitratestats.a 00:43:36.864 [171/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:43:36.864 [172/720] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:43:36.864 [173/720] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:43:37.122 [174/720] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:43:37.122 [175/720] Linking static target lib/librte_bbdev.a 00:43:37.122 [176/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:43:37.381 [177/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:43:37.381 [178/720] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:43:37.381 [179/720] Linking static target lib/librte_hash.a 00:43:37.381 [180/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:43:37.381 [181/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:43:37.639 [182/720] Linking static target lib/librte_ethdev.a 00:43:37.639 [183/720] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:37.639 [184/720] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:43:37.639 [185/720] Linking static target lib/acl/libavx2_tmp.a 00:43:37.639 [186/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:43:37.639 [187/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:43:37.897 [188/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:43:37.897 [189/720] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:43:37.897 [190/720] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:43:37.897 [191/720] Linking static target 
lib/librte_cfgfile.a 00:43:38.156 [192/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:43:38.156 [193/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:43:38.414 [194/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:43:38.414 [195/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:43:38.414 [196/720] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:43:38.414 [197/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:43:38.414 [198/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:43:38.414 [199/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:43:38.414 [200/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:43:38.414 [201/720] Linking static target lib/librte_compressdev.a 00:43:38.414 [202/720] Linking static target lib/librte_bpf.a 00:43:38.673 [203/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:43:38.673 [204/720] Linking static target lib/librte_acl.a 00:43:38.673 [205/720] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:43:38.932 [206/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:43:38.932 [207/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:43:38.932 [208/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:43:38.932 [209/720] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:38.932 [210/720] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:43:38.932 [211/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:43:38.932 [212/720] Linking static target lib/librte_distributor.a 00:43:38.932 [213/720] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:43:39.192 [214/720] Linking target lib/librte_eal.so.24.2 00:43:39.192 [215/720] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:43:39.192 [216/720] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:43:39.192 [217/720] Linking target lib/librte_ring.so.24.2 00:43:39.192 [218/720] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:43:39.192 [219/720] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:43:39.192 [220/720] Linking target lib/librte_meter.so.24.2 00:43:39.192 [221/720] Linking target lib/librte_pci.so.24.2 00:43:39.452 [222/720] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:43:39.452 [223/720] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:43:39.452 [224/720] Linking target lib/librte_rcu.so.24.2 00:43:39.452 [225/720] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:43:39.452 [226/720] Linking target lib/librte_mempool.so.24.2 00:43:39.452 [227/720] Linking target lib/librte_timer.so.24.2 00:43:39.452 [228/720] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:43:39.452 [229/720] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:43:39.452 [230/720] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:43:39.452 [231/720] Generating symbol file 
lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:43:39.452 [232/720] Linking target lib/librte_acl.so.24.2 00:43:39.452 [233/720] Linking static target lib/librte_dmadev.a 00:43:39.452 [234/720] Linking target lib/librte_cfgfile.so.24.2 00:43:39.452 [235/720] Linking target lib/librte_mbuf.so.24.2 00:43:39.711 [236/720] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:43:39.711 [237/720] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:43:39.711 [238/720] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:43:39.712 [239/720] Linking target lib/librte_net.so.24.2 00:43:39.712 [240/720] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:43:39.712 [241/720] Linking target lib/librte_cmdline.so.24.2 00:43:39.972 [242/720] Linking target lib/librte_hash.so.24.2 00:43:39.972 [243/720] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:39.972 [244/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:43:39.972 [245/720] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:43:39.972 [246/720] Linking target lib/librte_bbdev.so.24.2 00:43:39.972 [247/720] Linking target lib/librte_compressdev.so.24.2 00:43:39.972 [248/720] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:43:39.972 [249/720] Linking target lib/librte_distributor.so.24.2 00:43:39.972 [250/720] Linking static target lib/librte_efd.a 00:43:39.972 [251/720] Linking target lib/librte_dmadev.so.24.2 00:43:39.972 [252/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:43:39.972 [253/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:43:40.232 [254/720] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:43:40.232 [255/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:43:40.232 [256/720] Linking static target lib/librte_cryptodev.a 00:43:40.232 [257/720] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:43:40.232 [258/720] Linking target lib/librte_efd.so.24.2 00:43:40.492 [259/720] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:43:40.492 [260/720] Linking static target lib/librte_dispatcher.a 00:43:40.752 [261/720] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:43:40.752 [262/720] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:43:40.752 [263/720] Linking static target lib/librte_gpudev.a 00:43:40.752 [264/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:43:40.752 [265/720] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:43:41.011 [266/720] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:43:41.011 [267/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:43:41.011 [268/720] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:43:41.271 [269/720] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:43:41.271 [270/720] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:41.271 [271/720] Linking target lib/librte_cryptodev.so.24.2 00:43:41.271 [272/720] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:43:41.271 [273/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 
00:43:41.271 [274/720] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:41.271 [275/720] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:43:41.531 [276/720] Linking target lib/librte_gpudev.so.24.2 00:43:41.531 [277/720] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:43:41.531 [278/720] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:43:41.531 [279/720] Linking static target lib/librte_gro.a 00:43:41.531 [280/720] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:43:41.791 [281/720] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:43:41.791 [282/720] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:43:41.791 [283/720] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:43:41.791 [284/720] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:43:41.791 [285/720] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:43:41.791 [286/720] Linking static target lib/librte_gso.a 00:43:41.791 [287/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:43:41.791 [288/720] Linking static target lib/librte_eventdev.a 00:43:41.791 [289/720] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:43:42.050 [290/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:43:42.050 [291/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:43:42.050 [292/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:43:42.050 [293/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:43:42.050 [294/720] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:43:42.050 [295/720] Linking static target lib/librte_jobstats.a 00:43:42.050 [296/720] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:42.380 [297/720] Linking target lib/librte_ethdev.so.24.2 00:43:42.380 [298/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:43:42.380 [299/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:43:42.380 [300/720] Linking static target lib/librte_ip_frag.a 00:43:42.380 [301/720] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:43:42.380 [302/720] Linking target lib/librte_metrics.so.24.2 00:43:42.380 [303/720] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:43:42.380 [304/720] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:43:42.380 [305/720] Linking target lib/librte_bpf.so.24.2 00:43:42.380 [306/720] Linking target lib/librte_gro.so.24.2 00:43:42.380 [307/720] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:43:42.380 [308/720] Linking target lib/librte_gso.so.24.2 00:43:42.380 [309/720] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:43:42.654 [310/720] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:43:42.654 [311/720] Linking target lib/librte_bitratestats.so.24.2 00:43:42.654 [312/720] Linking target lib/librte_jobstats.so.24.2 00:43:42.654 [313/720] Linking static target lib/librte_latencystats.a 00:43:42.654 [314/720] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:43:42.654 
[315/720] Linking static target lib/member/libsketch_avx512_tmp.a 00:43:42.654 [316/720] Linking target lib/librte_ip_frag.so.24.2 00:43:42.654 [317/720] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:43:42.654 [318/720] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:43:42.654 [319/720] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:43:42.654 [320/720] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:43:42.654 [321/720] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:43:42.654 [322/720] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:43:42.654 [323/720] Linking target lib/librte_latencystats.so.24.2 00:43:42.912 [324/720] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:43:42.912 [325/720] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:43:42.912 [326/720] Linking static target lib/librte_lpm.a 00:43:42.912 [327/720] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:43:43.172 [328/720] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:43:43.172 [329/720] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:43:43.172 [330/720] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:43:43.172 [331/720] Linking static target lib/librte_pcapng.a 00:43:43.172 [332/720] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:43:43.172 [333/720] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:43:43.172 [334/720] Linking target lib/librte_lpm.so.24.2 00:43:43.172 [335/720] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:43:43.172 [336/720] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:43:43.431 [337/720] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:43:43.431 [338/720] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:43:43.431 [339/720] Linking target lib/librte_pcapng.so.24.2 00:43:43.431 [340/720] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:43:43.431 [341/720] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:43:43.431 [342/720] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:43:43.688 [343/720] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:43:43.688 [344/720] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:43:43.688 [345/720] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:43:43.689 [346/720] Linking static target lib/librte_power.a 00:43:43.689 [347/720] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:43.689 [348/720] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:43:43.689 [349/720] Linking static target lib/librte_regexdev.a 00:43:43.689 [350/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:43:43.689 [351/720] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:43:43.689 [352/720] Linking static target lib/librte_rawdev.a 00:43:43.689 [353/720] Linking target lib/librte_eventdev.so.24.2 00:43:43.947 [354/720] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:43:43.947 [355/720] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:43:43.947 [356/720] 
Linking target lib/librte_dispatcher.so.24.2 00:43:43.947 [357/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:43:43.947 [358/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:43:44.206 [359/720] Linking static target lib/librte_mldev.a 00:43:44.206 [360/720] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:43:44.206 [361/720] Linking static target lib/librte_member.a 00:43:44.206 [362/720] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:43:44.206 [363/720] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:44.206 [364/720] Linking target lib/librte_rawdev.so.24.2 00:43:44.206 [365/720] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:43:44.206 [366/720] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:43:44.206 [367/720] Linking target lib/librte_power.so.24.2 00:43:44.464 [368/720] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:43:44.464 [369/720] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:44.464 [370/720] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:43:44.464 [371/720] Linking target lib/librte_regexdev.so.24.2 00:43:44.464 [372/720] Linking target lib/librte_member.so.24.2 00:43:44.464 [373/720] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:43:44.464 [374/720] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:43:44.464 [375/720] Linking static target lib/librte_rib.a 00:43:44.464 [376/720] Linking static target lib/librte_reorder.a 00:43:44.464 [377/720] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:43:44.723 [378/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:43:44.723 [379/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:43:44.723 [380/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:43:44.723 [381/720] Linking static target lib/librte_stack.a 00:43:44.723 [382/720] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:43:44.723 [383/720] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:43:44.723 [384/720] Linking target lib/librte_reorder.so.24.2 00:43:44.723 [385/720] Linking static target lib/librte_security.a 00:43:44.982 [386/720] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:43:44.982 [387/720] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:43:44.982 [388/720] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:43:44.982 [389/720] Linking target lib/librte_rib.so.24.2 00:43:44.982 [390/720] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:43:44.982 [391/720] Linking target lib/librte_stack.so.24.2 00:43:44.982 [392/720] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:43:44.982 [393/720] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:43:45.240 [394/720] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:43:45.240 [395/720] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:43:45.240 [396/720] Linking target lib/librte_security.so.24.2 00:43:45.240 [397/720] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:43:45.240 [398/720] Generating symbol file 
lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:43:45.240 [399/720] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:45.497 [400/720] Linking target lib/librte_mldev.so.24.2 00:43:45.497 [401/720] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:43:45.497 [402/720] Linking static target lib/librte_sched.a 00:43:45.497 [403/720] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:43:45.756 [404/720] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:43:45.756 [405/720] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:43:46.013 [406/720] Linking target lib/librte_sched.so.24.2 00:43:46.013 [407/720] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:43:46.013 [408/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:43:46.013 [409/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:43:46.013 [410/720] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:43:46.271 [411/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:43:46.272 [412/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:43:46.272 [413/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:43:46.272 [414/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:43:46.272 [415/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:43:46.530 [416/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:43:46.788 [417/720] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:43:46.788 [418/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:43:46.788 [419/720] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:43:46.788 [420/720] Linking static target lib/librte_ipsec.a 00:43:46.788 [421/720] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:43:46.788 [422/720] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:43:46.788 [423/720] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:43:46.788 [424/720] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:43:47.046 [425/720] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:43:47.046 [426/720] Linking target lib/librte_ipsec.so.24.2 00:43:47.046 [427/720] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:43:47.046 [428/720] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:43:47.306 [429/720] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:43:47.306 [430/720] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:43:47.605 [431/720] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:43:47.605 [432/720] Linking static target lib/librte_fib.a 00:43:47.605 [433/720] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:43:47.605 [434/720] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:43:47.605 [435/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:43:47.605 [436/720] Linking static target lib/librte_pdcp.a 00:43:47.605 [437/720] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:43:47.864 [438/720] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:43:47.864 [439/720] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:43:47.864 [440/720] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:43:47.864 [441/720] Linking 
target lib/librte_fib.so.24.2 00:43:47.864 [442/720] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:43:48.122 [443/720] Linking target lib/librte_pdcp.so.24.2 00:43:48.122 [444/720] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:43:48.380 [445/720] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:43:48.380 [446/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:43:48.380 [447/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:43:48.380 [448/720] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:43:48.380 [449/720] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:43:48.639 [450/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:43:48.639 [451/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:43:48.639 [452/720] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:43:48.639 [453/720] Linking static target lib/librte_port.a 00:43:48.898 [454/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:43:48.899 [455/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:43:48.899 [456/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:43:48.899 [457/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:43:48.899 [458/720] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:43:49.158 [459/720] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:43:49.158 [460/720] Linking static target lib/librte_pdump.a 00:43:49.158 [461/720] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:43:49.158 [462/720] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:43:49.158 [463/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:43:49.417 [464/720] Linking target lib/librte_port.so.24.2 00:43:49.417 [465/720] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:43:49.417 [466/720] Linking target lib/librte_pdump.so.24.2 00:43:49.418 [467/720] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:43:49.679 [468/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:43:49.679 [469/720] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:43:49.679 [470/720] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:43:49.679 [471/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:43:49.679 [472/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:43:49.679 [473/720] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:43:49.679 [474/720] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:43:49.939 [475/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:43:49.939 [476/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:43:49.939 [477/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:43:49.939 [478/720] Linking static target lib/librte_table.a 00:43:50.199 [479/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:43:50.199 [480/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:43:50.459 [481/720] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:43:50.719 
[482/720] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:43:50.719 [483/720] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:43:50.719 [484/720] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:43:50.719 [485/720] Linking target lib/librte_table.so.24.2 00:43:50.719 [486/720] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:43:50.719 [487/720] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:43:50.979 [488/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:43:51.239 [489/720] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:43:51.239 [490/720] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:43:51.239 [491/720] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:43:51.239 [492/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:43:51.239 [493/720] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:43:51.498 [494/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:43:51.498 [495/720] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:43:51.498 [496/720] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:43:51.498 [497/720] Linking static target lib/librte_graph.a 00:43:51.498 [498/720] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:43:51.757 [499/720] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:43:51.757 [500/720] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:43:52.016 [501/720] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:43:52.016 [502/720] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:43:52.016 [503/720] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:43:52.016 [504/720] Linking target lib/librte_graph.so.24.2 00:43:52.275 [505/720] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:43:52.275 [506/720] Compiling C object lib/librte_node.a.p/node_null.c.o 00:43:52.275 [507/720] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:43:52.534 [508/720] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:43:52.534 [509/720] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:43:52.534 [510/720] Compiling C object lib/librte_node.a.p/node_log.c.o 00:43:52.534 [511/720] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:43:52.534 [512/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:43:52.534 [513/720] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:43:52.534 [514/720] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:43:52.793 [515/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:43:52.793 [516/720] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:43:53.052 [517/720] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:43:53.052 [518/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:43:53.052 [519/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:43:53.052 [520/720] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:43:53.052 [521/720] Linking static target lib/librte_node.a 00:43:53.052 [522/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:43:53.311 [523/720] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:43:53.311 [524/720] Linking static target drivers/libtmp_rte_bus_pci.a 00:43:53.311 [525/720] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:43:53.311 [526/720] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:43:53.311 [527/720] Linking static target drivers/libtmp_rte_bus_vdev.a 00:43:53.311 [528/720] Linking target lib/librte_node.so.24.2 00:43:53.569 [529/720] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:43:53.569 [530/720] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:43:53.569 [531/720] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:43:53.569 [532/720] Linking static target drivers/librte_bus_pci.a 00:43:53.569 [533/720] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:43:53.569 [534/720] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:43:53.569 [535/720] Linking static target drivers/librte_bus_vdev.a 00:43:53.569 [536/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:43:53.569 [537/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:43:53.569 [538/720] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:43:53.569 [539/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:43:53.828 [540/720] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:53.828 [541/720] Linking target drivers/librte_bus_vdev.so.24.2 00:43:53.828 [542/720] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:43:53.828 [543/720] Linking static target drivers/libtmp_rte_mempool_ring.a 00:43:53.828 [544/720] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:43:53.828 [545/720] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:43:53.828 [546/720] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:43:54.087 [547/720] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:43:54.087 [548/720] Linking static target drivers/librte_mempool_ring.a 00:43:54.087 [549/720] Linking target drivers/librte_bus_pci.so.24.2 00:43:54.087 [550/720] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:43:54.087 [551/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:43:54.087 [552/720] Linking target drivers/librte_mempool_ring.so.24.2 00:43:54.087 [553/720] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:43:54.346 [554/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:43:54.346 [555/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:43:54.605 [556/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:43:54.864 [557/720] Linking static target drivers/net/i40e/base/libi40e_base.a 00:43:55.122 [558/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:43:55.380 [559/720] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:43:55.380 [560/720] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 
00:43:55.380 [561/720] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:43:55.380 [562/720] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:43:55.638 [563/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:43:55.638 [564/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:43:55.638 [565/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:43:55.897 [566/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:43:55.897 [567/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:43:56.156 [568/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:43:56.156 [569/720] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:43:56.156 [570/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:43:56.414 [571/720] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:43:56.414 [572/720] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:43:56.678 [573/720] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:43:56.997 [574/720] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:43:56.997 [575/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:43:56.997 [576/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:43:56.997 [577/720] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:43:56.997 [578/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:43:56.997 [579/720] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:43:57.284 [580/720] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:43:57.284 [581/720] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:43:57.284 [582/720] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:43:57.285 [583/720] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:43:57.543 [584/720] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:43:57.543 [585/720] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:43:57.543 [586/720] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:43:57.543 [587/720] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:43:57.543 [588/720] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:43:57.801 [589/720] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:43:57.801 [590/720] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:43:57.801 [591/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:43:57.801 [592/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:43:57.801 [593/720] Linking static target drivers/libtmp_rte_net_i40e.a 00:43:58.060 [594/720] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:43:58.060 [595/720] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:43:58.060 [596/720] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:43:58.060 [597/720] Linking static target drivers/librte_net_i40e.a 00:43:58.319 [598/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:43:58.319 [599/720] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:43:58.319 [600/720] Compiling C object 
app/dpdk-proc-info.p/proc-info_main.c.o 00:43:58.319 [601/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:43:58.578 [602/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:43:58.578 [603/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:43:58.837 [604/720] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:43:58.837 [605/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:43:58.837 [606/720] Linking target drivers/librte_net_i40e.so.24.2 00:43:58.837 [607/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:43:58.837 [608/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:43:59.110 [609/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:43:59.110 [610/720] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:43:59.110 [611/720] Linking static target lib/librte_vhost.a 00:43:59.110 [612/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:43:59.369 [613/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:43:59.369 [614/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:43:59.628 [615/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:43:59.628 [616/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:43:59.628 [617/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:43:59.628 [618/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:43:59.628 [619/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:43:59.628 [620/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:43:59.887 [621/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:43:59.887 [622/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:43:59.887 [623/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:43:59.887 [624/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:43:59.887 [625/720] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:44:00.146 [626/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:44:00.146 [627/720] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:44:00.146 [628/720] Linking target lib/librte_vhost.so.24.2 00:44:00.406 [629/720] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:44:00.406 [630/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:44:00.406 [631/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:44:00.665 [632/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:44:01.234 [633/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:44:01.234 [634/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:44:01.234 [635/720] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:44:01.234 [636/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:44:01.234 [637/720] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:44:01.234 [638/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:44:01.234 [639/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:44:01.493 [640/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:44:01.493 [641/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:44:01.493 [642/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:44:01.493 [643/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:44:01.493 [644/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:44:01.751 [645/720] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:44:01.752 [646/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:44:02.010 [647/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:44:02.010 [648/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:44:02.010 [649/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:44:02.010 [650/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:44:02.010 [651/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:44:02.268 [652/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:44:02.268 [653/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:44:02.268 [654/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:44:02.268 [655/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:44:02.526 [656/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:44:02.526 [657/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:44:02.526 [658/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:44:02.785 [659/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:44:02.785 [660/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:44:02.785 [661/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:44:02.785 [662/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:44:02.785 [663/720] Linking static target lib/librte_pipeline.a 00:44:02.785 [664/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:44:02.785 [665/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:44:02.785 [666/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:44:03.045 [667/720] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:44:03.045 [668/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:44:03.045 [669/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:44:03.045 [670/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:44:03.303 [671/720] Linking target app/dpdk-dumpcap 00:44:03.303 [672/720] Linking target app/dpdk-graph 00:44:03.303 [673/720] Linking target app/dpdk-pdump 00:44:03.561 [674/720] Linking target app/dpdk-proc-info 
00:44:03.561 [675/720] Linking target app/dpdk-test-acl 00:44:03.561 [676/720] Linking target app/dpdk-test-bbdev 00:44:03.561 [677/720] Linking target app/dpdk-test-cmdline 00:44:03.561 [678/720] Linking target app/dpdk-test-compress-perf 00:44:03.822 [679/720] Linking target app/dpdk-test-crypto-perf 00:44:03.822 [680/720] Linking target app/dpdk-test-dma-perf 00:44:04.081 [681/720] Linking target app/dpdk-test-eventdev 00:44:04.081 [682/720] Linking target app/dpdk-test-fib 00:44:04.081 [683/720] Linking target app/dpdk-test-flow-perf 00:44:04.081 [684/720] Linking target app/dpdk-test-gpudev 00:44:04.081 [685/720] Linking target app/dpdk-test-mldev 00:44:04.081 [686/720] Linking target app/dpdk-test-pipeline 00:44:04.338 [687/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:44:04.338 [688/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:44:04.596 [689/720] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:44:04.596 [690/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:44:04.596 [691/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:44:04.854 [692/720] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:44:04.854 [693/720] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:44:04.854 [694/720] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:44:05.113 [695/720] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:44:05.113 [696/720] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:44:05.371 [697/720] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:44:05.371 [698/720] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:44:05.371 [699/720] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:44:05.371 [700/720] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:44:05.371 [701/720] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:44:05.371 [702/720] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:44:05.629 [703/720] Linking target lib/librte_pipeline.so.24.2 00:44:05.887 [704/720] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:44:05.887 [705/720] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:44:05.887 [706/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:44:06.145 [707/720] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:44:06.145 [708/720] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:44:06.145 [709/720] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:44:06.408 [710/720] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:44:06.408 [711/720] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:44:06.408 [712/720] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:44:06.408 [713/720] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:44:06.408 [714/720] Linking target app/dpdk-test-sad 00:44:06.408 [715/720] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:44:06.408 [716/720] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:44:06.666 [717/720] Linking target app/dpdk-test-regex 00:44:06.666 [718/720] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:44:06.923 [719/720] Linking target app/dpdk-testpmd 00:44:07.181 [720/720] Linking target 
app/dpdk-test-security-perf 00:44:07.181 10:39:14 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:44:07.181 10:39:14 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:44:07.181 10:39:14 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:44:07.181 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:44:07.181 [0/1] Installing files. 00:44:07.442 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:44:07.442 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:44:07.442 Installing 
/home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:44:07.442 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:44:07.443 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.443 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:44:07.444 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:44:07.445 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.445 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 
00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:44:07.446 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:44:07.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:44:07.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:44:07.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:44:07.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:44:07.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:44:07.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:44:07.706 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 
Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing 
lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_rib.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.706 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.707 Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:44:07.969 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:44:07.969 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing drivers/librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:44:07.969 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:44:07.969 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:44:07.969 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-graph to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.969 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.969 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.969 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.969 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.969 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing 
/home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing 
/home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.971 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:44:07.972 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:44:07.972 Installing 
/home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:44:07.972 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:44:07.972 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:44:07.972 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:44:07.972 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:44:07.972 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24 00:44:07.972 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:44:07.972 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:44:07.972 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:44:07.972 Installing symlink pointing to librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:44:07.972 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:44:07.972 Installing symlink pointing to librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:44:07.972 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:44:07.972 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:44:07.972 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:44:07.972 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:44:07.972 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:44:07.973 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:44:07.973 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:44:07.973 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:44:07.973 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:44:07.973 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:44:07.973 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:44:07.973 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:44:07.973 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:44:07.973 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:44:07.973 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:44:07.973 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:44:07.973 Installing symlink pointing to librte_cmdline.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:44:07.973 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:44:07.973 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:44:07.973 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:44:07.973 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:44:07.973 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:44:07.973 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:44:07.973 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:44:07.973 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:44:07.973 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:44:07.973 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:44:07.973 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:44:07.973 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:44:07.973 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:44:07.973 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:44:07.973 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:44:07.973 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:44:07.973 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:44:07.973 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:44:07.973 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:44:07.973 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:44:07.973 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:44:07.973 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:44:07.973 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:44:07.973 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:44:07.973 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:44:07.973 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:44:07.973 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:44:07.973 Installing symlink pointing to librte_eventdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:44:07.973 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:44:07.973 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:44:07.973 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:44:07.973 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:44:07.973 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:44:07.973 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:44:07.973 Installing symlink pointing to librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:44:07.973 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:44:07.973 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:44:07.973 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:44:07.973 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:44:07.973 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:44:07.973 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:44:07.973 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:44:07.973 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:44:07.973 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:44:07.973 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:44:07.973 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:44:07.973 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:44:07.973 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:44:07.973 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:44:07.973 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:44:07.973 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:44:07.973 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:44:07.973 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:44:07.973 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:44:07.973 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:44:07.973 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:44:07.973 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:44:07.973 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:44:07.973 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:44:07.973 Installing symlink pointing to librte_pcapng.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:44:07.973 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:44:07.973 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:44:07.973 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:44:07.973 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:44:07.973 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:44:07.973 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:44:07.973 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:44:07.973 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:44:07.973 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:44:07.973 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:44:07.973 Installing symlink pointing to librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:44:07.973 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:44:07.973 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:44:07.973 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:44:07.973 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:44:07.973 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:44:07.973 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:44:07.973 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:44:07.973 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:44:07.973 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:44:07.973 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:44:07.973 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:44:07.973 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:44:07.973 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:44:07.973 Installing symlink pointing to librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:44:07.973 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:44:07.974 Installing symlink pointing to librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:44:07.974 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:44:07.974 Installing 
symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:44:07.974 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:44:07.974 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:44:07.974 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:44:07.974 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:44:07.974 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:44:07.974 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:44:07.974 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:44:07.974 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:44:07.974 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:44:07.974 Installing symlink pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:44:07.974 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:44:07.974 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:44:07.974 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:44:07.974 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:44:07.974 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:44:07.974 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:44:07.974 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:44:07.974 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:44:07.974 10:39:15 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:44:07.974 10:39:15 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:44:07.974 00:44:07.974 real 0m45.154s 00:44:07.974 user 5m2.914s 00:44:07.974 sys 1m1.046s 00:44:07.974 10:39:15 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:44:07.974 ************************************ 00:44:07.974 END TEST build_native_dpdk 00:44:07.974 ************************************ 00:44:07.974 10:39:15 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:44:08.232 10:39:15 -- common/autotest_common.sh@1142 -- $ return 0 00:44:08.232 10:39:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:44:08.232 10:39:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:44:08.232 10:39:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:44:08.232 10:39:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:44:08.232 10:39:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 
00:44:08.232 10:39:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:44:08.232 10:39:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:44:08.232 10:39:15 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:44:08.232 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:44:08.491 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:44:08.491 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:44:08.491 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:44:08.749 Using 'verbs' RDMA provider 00:44:25.008 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:44:39.927 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:44:39.927 go version go1.21.1 linux/amd64 00:44:40.493 Creating mk/config.mk...done. 00:44:40.493 Creating mk/cc.flags.mk...done. 00:44:40.493 Type 'make' to build. 00:44:40.493 10:39:48 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:44:40.493 10:39:48 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:44:40.493 10:39:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:44:40.493 10:39:48 -- common/autotest_common.sh@10 -- $ set +x 00:44:40.493 ************************************ 00:44:40.493 START TEST make 00:44:40.493 ************************************ 00:44:40.493 10:39:48 make -- common/autotest_common.sh@1123 -- $ make -j10 00:44:40.753 make[1]: Nothing to be done for 'all'. 00:45:02.684 CC lib/ut/ut.o 00:45:02.684 CC lib/ut_mock/mock.o 00:45:02.684 CC lib/log/log_flags.o 00:45:02.684 CC lib/log/log.o 00:45:02.684 CC lib/log/log_deprecated.o 00:45:02.684 LIB libspdk_ut_mock.a 00:45:02.684 LIB libspdk_log.a 00:45:02.684 SO libspdk_ut_mock.so.6.0 00:45:02.684 SO libspdk_log.so.7.0 00:45:02.684 LIB libspdk_ut.a 00:45:02.684 SYMLINK libspdk_ut_mock.so 00:45:02.684 SO libspdk_ut.so.2.0 00:45:02.684 SYMLINK libspdk_log.so 00:45:02.684 SYMLINK libspdk_ut.so 00:45:02.684 CC lib/dma/dma.o 00:45:02.684 CXX lib/trace_parser/trace.o 00:45:02.684 CC lib/util/base64.o 00:45:02.684 CC lib/util/bit_array.o 00:45:02.684 CC lib/util/crc16.o 00:45:02.684 CC lib/util/crc32.o 00:45:02.684 CC lib/util/crc32c.o 00:45:02.684 CC lib/util/cpuset.o 00:45:02.684 CC lib/ioat/ioat.o 00:45:02.684 CC lib/util/crc32_ieee.o 00:45:02.684 CC lib/vfio_user/host/vfio_user_pci.o 00:45:02.684 CC lib/vfio_user/host/vfio_user.o 00:45:02.684 CC lib/util/crc64.o 00:45:02.684 CC lib/util/dif.o 00:45:02.684 LIB libspdk_dma.a 00:45:02.684 CC lib/util/fd.o 00:45:02.684 CC lib/util/fd_group.o 00:45:02.684 SO libspdk_dma.so.4.0 00:45:02.684 CC lib/util/file.o 00:45:02.684 SYMLINK libspdk_dma.so 00:45:02.684 CC lib/util/hexlify.o 00:45:02.684 LIB libspdk_ioat.a 00:45:02.684 CC lib/util/iov.o 00:45:02.684 SO libspdk_ioat.so.7.0 00:45:02.684 CC lib/util/math.o 00:45:02.684 CC lib/util/net.o 00:45:02.684 LIB libspdk_vfio_user.a 00:45:02.684 SYMLINK libspdk_ioat.so 00:45:02.684 CC lib/util/pipe.o 00:45:02.684 CC lib/util/strerror_tls.o 00:45:02.684 SO libspdk_vfio_user.so.5.0 00:45:02.684 CC lib/util/string.o 00:45:02.684 CC lib/util/uuid.o 00:45:02.684 SYMLINK libspdk_vfio_user.so 00:45:02.684 CC lib/util/xor.o 00:45:02.684 CC lib/util/zipf.o 00:45:02.684 LIB libspdk_util.a 
00:45:02.684 SO libspdk_util.so.10.0 00:45:02.684 LIB libspdk_trace_parser.a 00:45:02.684 SO libspdk_trace_parser.so.5.0 00:45:02.684 SYMLINK libspdk_util.so 00:45:02.684 SYMLINK libspdk_trace_parser.so 00:45:02.684 CC lib/conf/conf.o 00:45:02.684 CC lib/rdma_utils/rdma_utils.o 00:45:02.684 CC lib/json/json_parse.o 00:45:02.684 CC lib/json/json_util.o 00:45:02.684 CC lib/json/json_write.o 00:45:02.684 CC lib/rdma_provider/common.o 00:45:02.684 CC lib/rdma_provider/rdma_provider_verbs.o 00:45:02.684 CC lib/vmd/vmd.o 00:45:02.684 CC lib/idxd/idxd.o 00:45:02.684 CC lib/env_dpdk/env.o 00:45:02.684 CC lib/env_dpdk/memory.o 00:45:02.684 LIB libspdk_rdma_provider.a 00:45:02.684 SO libspdk_rdma_provider.so.6.0 00:45:02.684 LIB libspdk_conf.a 00:45:02.684 CC lib/idxd/idxd_user.o 00:45:02.684 SO libspdk_conf.so.6.0 00:45:02.684 LIB libspdk_rdma_utils.a 00:45:02.684 SYMLINK libspdk_rdma_provider.so 00:45:02.684 CC lib/vmd/led.o 00:45:02.684 CC lib/env_dpdk/pci.o 00:45:02.684 LIB libspdk_json.a 00:45:02.684 SO libspdk_rdma_utils.so.1.0 00:45:02.684 SYMLINK libspdk_conf.so 00:45:02.684 SO libspdk_json.so.6.0 00:45:02.684 CC lib/env_dpdk/init.o 00:45:02.684 SYMLINK libspdk_rdma_utils.so 00:45:02.684 CC lib/idxd/idxd_kernel.o 00:45:02.684 SYMLINK libspdk_json.so 00:45:02.684 CC lib/env_dpdk/threads.o 00:45:02.684 CC lib/env_dpdk/pci_ioat.o 00:45:02.684 CC lib/env_dpdk/pci_virtio.o 00:45:02.684 LIB libspdk_idxd.a 00:45:02.684 CC lib/env_dpdk/pci_vmd.o 00:45:02.684 LIB libspdk_vmd.a 00:45:02.684 CC lib/env_dpdk/pci_idxd.o 00:45:02.684 SO libspdk_idxd.so.12.0 00:45:02.684 SO libspdk_vmd.so.6.0 00:45:02.684 CC lib/env_dpdk/pci_event.o 00:45:02.684 CC lib/env_dpdk/sigbus_handler.o 00:45:02.684 CC lib/jsonrpc/jsonrpc_server.o 00:45:02.684 SYMLINK libspdk_idxd.so 00:45:02.684 CC lib/env_dpdk/pci_dpdk.o 00:45:02.684 SYMLINK libspdk_vmd.so 00:45:02.684 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:45:02.684 CC lib/env_dpdk/pci_dpdk_2207.o 00:45:02.684 CC lib/env_dpdk/pci_dpdk_2211.o 00:45:02.684 CC lib/jsonrpc/jsonrpc_client.o 00:45:02.684 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:45:02.684 LIB libspdk_jsonrpc.a 00:45:02.684 SO libspdk_jsonrpc.so.6.0 00:45:02.684 SYMLINK libspdk_jsonrpc.so 00:45:02.948 LIB libspdk_env_dpdk.a 00:45:02.948 SO libspdk_env_dpdk.so.14.1 00:45:03.287 SYMLINK libspdk_env_dpdk.so 00:45:03.287 CC lib/rpc/rpc.o 00:45:03.287 LIB libspdk_rpc.a 00:45:03.287 SO libspdk_rpc.so.6.0 00:45:03.575 SYMLINK libspdk_rpc.so 00:45:03.833 CC lib/trace/trace.o 00:45:03.833 CC lib/trace/trace_flags.o 00:45:03.833 CC lib/trace/trace_rpc.o 00:45:03.833 CC lib/keyring/keyring.o 00:45:03.833 CC lib/keyring/keyring_rpc.o 00:45:03.833 CC lib/notify/notify.o 00:45:03.833 CC lib/notify/notify_rpc.o 00:45:04.092 LIB libspdk_notify.a 00:45:04.092 SO libspdk_notify.so.6.0 00:45:04.092 LIB libspdk_trace.a 00:45:04.092 LIB libspdk_keyring.a 00:45:04.092 SYMLINK libspdk_notify.so 00:45:04.092 SO libspdk_trace.so.10.0 00:45:04.092 SO libspdk_keyring.so.1.0 00:45:04.092 SYMLINK libspdk_trace.so 00:45:04.092 SYMLINK libspdk_keyring.so 00:45:04.662 CC lib/thread/thread.o 00:45:04.662 CC lib/thread/iobuf.o 00:45:04.662 CC lib/sock/sock.o 00:45:04.662 CC lib/sock/sock_rpc.o 00:45:04.921 LIB libspdk_sock.a 00:45:04.921 SO libspdk_sock.so.10.0 00:45:04.921 SYMLINK libspdk_sock.so 00:45:05.500 CC lib/nvme/nvme_ctrlr_cmd.o 00:45:05.500 CC lib/nvme/nvme_ctrlr.o 00:45:05.500 CC lib/nvme/nvme_fabric.o 00:45:05.500 CC lib/nvme/nvme_ns_cmd.o 00:45:05.500 CC lib/nvme/nvme_ns.o 00:45:05.500 CC lib/nvme/nvme_pcie_common.o 00:45:05.500 CC 
lib/nvme/nvme_pcie.o 00:45:05.500 CC lib/nvme/nvme_qpair.o 00:45:05.500 CC lib/nvme/nvme.o 00:45:05.758 LIB libspdk_thread.a 00:45:05.758 SO libspdk_thread.so.10.1 00:45:05.758 SYMLINK libspdk_thread.so 00:45:05.758 CC lib/nvme/nvme_quirks.o 00:45:06.017 CC lib/nvme/nvme_transport.o 00:45:06.017 CC lib/nvme/nvme_discovery.o 00:45:06.017 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:45:06.017 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:45:06.017 CC lib/nvme/nvme_tcp.o 00:45:06.017 CC lib/nvme/nvme_opal.o 00:45:06.276 CC lib/nvme/nvme_io_msg.o 00:45:06.276 CC lib/nvme/nvme_poll_group.o 00:45:06.276 CC lib/nvme/nvme_zns.o 00:45:06.535 CC lib/nvme/nvme_stubs.o 00:45:06.535 CC lib/nvme/nvme_auth.o 00:45:06.535 CC lib/nvme/nvme_cuse.o 00:45:06.535 CC lib/nvme/nvme_rdma.o 00:45:06.794 CC lib/accel/accel.o 00:45:06.794 CC lib/blob/blobstore.o 00:45:07.052 CC lib/init/json_config.o 00:45:07.052 CC lib/blob/request.o 00:45:07.052 CC lib/virtio/virtio.o 00:45:07.052 CC lib/init/subsystem.o 00:45:07.312 CC lib/virtio/virtio_vhost_user.o 00:45:07.312 CC lib/blob/zeroes.o 00:45:07.312 CC lib/virtio/virtio_vfio_user.o 00:45:07.312 CC lib/virtio/virtio_pci.o 00:45:07.312 CC lib/init/subsystem_rpc.o 00:45:07.312 CC lib/init/rpc.o 00:45:07.312 CC lib/blob/blob_bs_dev.o 00:45:07.312 CC lib/accel/accel_rpc.o 00:45:07.312 CC lib/accel/accel_sw.o 00:45:07.312 LIB libspdk_init.a 00:45:07.571 SO libspdk_init.so.5.0 00:45:07.571 LIB libspdk_virtio.a 00:45:07.571 SYMLINK libspdk_init.so 00:45:07.571 SO libspdk_virtio.so.7.0 00:45:07.571 SYMLINK libspdk_virtio.so 00:45:07.571 LIB libspdk_accel.a 00:45:07.571 LIB libspdk_nvme.a 00:45:07.830 SO libspdk_accel.so.16.0 00:45:07.830 SYMLINK libspdk_accel.so 00:45:07.830 CC lib/event/reactor.o 00:45:07.830 CC lib/event/log_rpc.o 00:45:07.830 CC lib/event/app.o 00:45:07.830 CC lib/event/app_rpc.o 00:45:07.830 CC lib/event/scheduler_static.o 00:45:07.830 SO libspdk_nvme.so.13.1 00:45:08.090 CC lib/bdev/bdev_rpc.o 00:45:08.090 CC lib/bdev/scsi_nvme.o 00:45:08.090 CC lib/bdev/bdev.o 00:45:08.090 CC lib/bdev/bdev_zone.o 00:45:08.090 CC lib/bdev/part.o 00:45:08.090 SYMLINK libspdk_nvme.so 00:45:08.348 LIB libspdk_event.a 00:45:08.349 SO libspdk_event.so.14.0 00:45:08.349 SYMLINK libspdk_event.so 00:45:09.308 LIB libspdk_blob.a 00:45:09.308 SO libspdk_blob.so.11.0 00:45:09.308 SYMLINK libspdk_blob.so 00:45:09.875 CC lib/blobfs/blobfs.o 00:45:09.875 CC lib/blobfs/tree.o 00:45:09.875 CC lib/lvol/lvol.o 00:45:10.132 LIB libspdk_bdev.a 00:45:10.389 SO libspdk_bdev.so.16.0 00:45:10.389 SYMLINK libspdk_bdev.so 00:45:10.389 LIB libspdk_blobfs.a 00:45:10.647 SO libspdk_blobfs.so.10.0 00:45:10.647 LIB libspdk_lvol.a 00:45:10.647 SO libspdk_lvol.so.10.0 00:45:10.647 SYMLINK libspdk_blobfs.so 00:45:10.647 SYMLINK libspdk_lvol.so 00:45:10.647 CC lib/nbd/nbd_rpc.o 00:45:10.647 CC lib/nvmf/ctrlr.o 00:45:10.647 CC lib/scsi/dev.o 00:45:10.647 CC lib/nvmf/ctrlr_discovery.o 00:45:10.647 CC lib/scsi/lun.o 00:45:10.647 CC lib/nbd/nbd.o 00:45:10.647 CC lib/scsi/port.o 00:45:10.647 CC lib/ublk/ublk.o 00:45:10.647 CC lib/nvmf/ctrlr_bdev.o 00:45:10.647 CC lib/ftl/ftl_core.o 00:45:10.905 CC lib/scsi/scsi.o 00:45:10.905 CC lib/scsi/scsi_bdev.o 00:45:10.905 CC lib/ublk/ublk_rpc.o 00:45:10.905 CC lib/scsi/scsi_pr.o 00:45:10.905 CC lib/scsi/scsi_rpc.o 00:45:10.905 LIB libspdk_nbd.a 00:45:10.905 CC lib/ftl/ftl_init.o 00:45:10.905 CC lib/ftl/ftl_layout.o 00:45:10.905 SO libspdk_nbd.so.7.0 00:45:11.164 CC lib/nvmf/subsystem.o 00:45:11.164 CC lib/nvmf/nvmf.o 00:45:11.164 SYMLINK libspdk_nbd.so 00:45:11.164 CC lib/nvmf/nvmf_rpc.o 
00:45:11.164 LIB libspdk_ublk.a 00:45:11.164 CC lib/nvmf/transport.o 00:45:11.164 CC lib/nvmf/tcp.o 00:45:11.164 CC lib/scsi/task.o 00:45:11.164 SO libspdk_ublk.so.3.0 00:45:11.164 CC lib/ftl/ftl_debug.o 00:45:11.164 SYMLINK libspdk_ublk.so 00:45:11.164 CC lib/ftl/ftl_io.o 00:45:11.164 CC lib/nvmf/stubs.o 00:45:11.422 LIB libspdk_scsi.a 00:45:11.422 CC lib/ftl/ftl_sb.o 00:45:11.422 SO libspdk_scsi.so.9.0 00:45:11.422 CC lib/nvmf/mdns_server.o 00:45:11.422 SYMLINK libspdk_scsi.so 00:45:11.422 CC lib/nvmf/rdma.o 00:45:11.679 CC lib/ftl/ftl_l2p.o 00:45:11.679 CC lib/nvmf/auth.o 00:45:11.679 CC lib/ftl/ftl_l2p_flat.o 00:45:11.679 CC lib/ftl/ftl_nv_cache.o 00:45:11.937 CC lib/ftl/ftl_band.o 00:45:11.937 CC lib/iscsi/conn.o 00:45:11.937 CC lib/vhost/vhost.o 00:45:11.937 CC lib/vhost/vhost_rpc.o 00:45:11.937 CC lib/vhost/vhost_scsi.o 00:45:11.937 CC lib/ftl/ftl_band_ops.o 00:45:12.195 CC lib/iscsi/init_grp.o 00:45:12.195 CC lib/iscsi/iscsi.o 00:45:12.452 CC lib/iscsi/md5.o 00:45:12.452 CC lib/iscsi/param.o 00:45:12.452 CC lib/vhost/vhost_blk.o 00:45:12.452 CC lib/vhost/rte_vhost_user.o 00:45:12.452 CC lib/ftl/ftl_writer.o 00:45:12.452 CC lib/ftl/ftl_rq.o 00:45:12.452 CC lib/iscsi/portal_grp.o 00:45:12.711 CC lib/iscsi/tgt_node.o 00:45:12.711 CC lib/iscsi/iscsi_subsystem.o 00:45:12.711 CC lib/ftl/ftl_reloc.o 00:45:12.711 CC lib/ftl/ftl_l2p_cache.o 00:45:12.711 CC lib/ftl/ftl_p2l.o 00:45:12.711 CC lib/ftl/mngt/ftl_mngt.o 00:45:12.970 CC lib/iscsi/iscsi_rpc.o 00:45:12.970 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:45:12.970 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:45:12.970 CC lib/ftl/mngt/ftl_mngt_startup.o 00:45:12.970 CC lib/ftl/mngt/ftl_mngt_md.o 00:45:13.230 CC lib/iscsi/task.o 00:45:13.230 CC lib/ftl/mngt/ftl_mngt_misc.o 00:45:13.230 LIB libspdk_nvmf.a 00:45:13.230 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:45:13.230 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:45:13.230 CC lib/ftl/mngt/ftl_mngt_band.o 00:45:13.230 SO libspdk_nvmf.so.19.0 00:45:13.230 LIB libspdk_vhost.a 00:45:13.230 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:45:13.230 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:45:13.230 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:45:13.488 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:45:13.488 CC lib/ftl/utils/ftl_conf.o 00:45:13.488 CC lib/ftl/utils/ftl_md.o 00:45:13.488 SO libspdk_vhost.so.8.0 00:45:13.488 LIB libspdk_iscsi.a 00:45:13.488 SYMLINK libspdk_nvmf.so 00:45:13.488 CC lib/ftl/utils/ftl_mempool.o 00:45:13.488 CC lib/ftl/utils/ftl_bitmap.o 00:45:13.488 SYMLINK libspdk_vhost.so 00:45:13.488 CC lib/ftl/utils/ftl_property.o 00:45:13.488 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:45:13.488 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:45:13.488 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:45:13.488 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:45:13.746 SO libspdk_iscsi.so.8.0 00:45:13.746 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:45:13.746 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:45:13.746 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:45:13.746 CC lib/ftl/upgrade/ftl_sb_v3.o 00:45:13.746 CC lib/ftl/upgrade/ftl_sb_v5.o 00:45:13.746 CC lib/ftl/nvc/ftl_nvc_dev.o 00:45:13.746 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:45:13.746 CC lib/ftl/base/ftl_base_dev.o 00:45:13.746 SYMLINK libspdk_iscsi.so 00:45:13.746 CC lib/ftl/base/ftl_base_bdev.o 00:45:13.746 CC lib/ftl/ftl_trace.o 00:45:14.005 LIB libspdk_ftl.a 00:45:14.264 SO libspdk_ftl.so.9.0 00:45:14.534 SYMLINK libspdk_ftl.so 00:45:15.101 CC module/env_dpdk/env_dpdk_rpc.o 00:45:15.101 CC module/scheduler/gscheduler/gscheduler.o 00:45:15.101 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:45:15.101 CC 
module/blob/bdev/blob_bdev.o 00:45:15.101 CC module/accel/error/accel_error.o 00:45:15.101 CC module/sock/posix/posix.o 00:45:15.101 CC module/keyring/file/keyring.o 00:45:15.101 CC module/accel/dsa/accel_dsa.o 00:45:15.101 CC module/scheduler/dynamic/scheduler_dynamic.o 00:45:15.101 CC module/accel/ioat/accel_ioat.o 00:45:15.101 LIB libspdk_env_dpdk_rpc.a 00:45:15.101 SO libspdk_env_dpdk_rpc.so.6.0 00:45:15.358 CC module/keyring/file/keyring_rpc.o 00:45:15.358 LIB libspdk_scheduler_gscheduler.a 00:45:15.358 SYMLINK libspdk_env_dpdk_rpc.so 00:45:15.358 LIB libspdk_scheduler_dpdk_governor.a 00:45:15.358 CC module/accel/ioat/accel_ioat_rpc.o 00:45:15.358 SO libspdk_scheduler_gscheduler.so.4.0 00:45:15.358 SO libspdk_scheduler_dpdk_governor.so.4.0 00:45:15.358 CC module/accel/error/accel_error_rpc.o 00:45:15.358 LIB libspdk_scheduler_dynamic.a 00:45:15.358 SO libspdk_scheduler_dynamic.so.4.0 00:45:15.358 SYMLINK libspdk_scheduler_gscheduler.so 00:45:15.358 SYMLINK libspdk_scheduler_dpdk_governor.so 00:45:15.358 CC module/accel/dsa/accel_dsa_rpc.o 00:45:15.358 LIB libspdk_blob_bdev.a 00:45:15.358 SO libspdk_blob_bdev.so.11.0 00:45:15.358 LIB libspdk_accel_ioat.a 00:45:15.358 LIB libspdk_keyring_file.a 00:45:15.359 SYMLINK libspdk_scheduler_dynamic.so 00:45:15.359 SO libspdk_accel_ioat.so.6.0 00:45:15.359 LIB libspdk_accel_error.a 00:45:15.359 SO libspdk_keyring_file.so.1.0 00:45:15.359 SYMLINK libspdk_blob_bdev.so 00:45:15.359 SO libspdk_accel_error.so.2.0 00:45:15.359 LIB libspdk_accel_dsa.a 00:45:15.359 SYMLINK libspdk_keyring_file.so 00:45:15.359 SYMLINK libspdk_accel_ioat.so 00:45:15.359 CC module/keyring/linux/keyring.o 00:45:15.359 CC module/keyring/linux/keyring_rpc.o 00:45:15.359 SYMLINK libspdk_accel_error.so 00:45:15.359 SO libspdk_accel_dsa.so.5.0 00:45:15.617 CC module/accel/iaa/accel_iaa.o 00:45:15.617 SYMLINK libspdk_accel_dsa.so 00:45:15.617 LIB libspdk_keyring_linux.a 00:45:15.617 SO libspdk_keyring_linux.so.1.0 00:45:15.617 CC module/bdev/error/vbdev_error.o 00:45:15.617 CC module/bdev/gpt/gpt.o 00:45:15.617 CC module/blobfs/bdev/blobfs_bdev.o 00:45:15.617 CC module/bdev/lvol/vbdev_lvol.o 00:45:15.617 CC module/bdev/delay/vbdev_delay.o 00:45:15.617 LIB libspdk_sock_posix.a 00:45:15.617 CC module/accel/iaa/accel_iaa_rpc.o 00:45:15.617 SYMLINK libspdk_keyring_linux.so 00:45:15.617 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:45:15.617 CC module/bdev/malloc/bdev_malloc.o 00:45:15.617 SO libspdk_sock_posix.so.6.0 00:45:15.875 CC module/bdev/null/bdev_null.o 00:45:15.875 SYMLINK libspdk_sock_posix.so 00:45:15.875 CC module/bdev/null/bdev_null_rpc.o 00:45:15.875 LIB libspdk_accel_iaa.a 00:45:15.875 CC module/bdev/gpt/vbdev_gpt.o 00:45:15.875 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:45:15.875 SO libspdk_accel_iaa.so.3.0 00:45:15.875 CC module/bdev/error/vbdev_error_rpc.o 00:45:15.875 SYMLINK libspdk_accel_iaa.so 00:45:15.875 CC module/bdev/malloc/bdev_malloc_rpc.o 00:45:15.875 LIB libspdk_blobfs_bdev.a 00:45:15.875 CC module/bdev/delay/vbdev_delay_rpc.o 00:45:15.875 LIB libspdk_bdev_null.a 00:45:16.134 SO libspdk_blobfs_bdev.so.6.0 00:45:16.134 SO libspdk_bdev_null.so.6.0 00:45:16.134 LIB libspdk_bdev_error.a 00:45:16.134 CC module/bdev/nvme/bdev_nvme.o 00:45:16.134 LIB libspdk_bdev_gpt.a 00:45:16.134 SO libspdk_bdev_error.so.6.0 00:45:16.134 SYMLINK libspdk_blobfs_bdev.so 00:45:16.134 SO libspdk_bdev_gpt.so.6.0 00:45:16.134 LIB libspdk_bdev_lvol.a 00:45:16.134 SYMLINK libspdk_bdev_null.so 00:45:16.134 LIB libspdk_bdev_malloc.a 00:45:16.134 SYMLINK libspdk_bdev_error.so 00:45:16.134 SO 
libspdk_bdev_lvol.so.6.0 00:45:16.134 LIB libspdk_bdev_delay.a 00:45:16.134 SO libspdk_bdev_malloc.so.6.0 00:45:16.134 SYMLINK libspdk_bdev_gpt.so 00:45:16.134 CC module/bdev/nvme/bdev_nvme_rpc.o 00:45:16.134 CC module/bdev/passthru/vbdev_passthru.o 00:45:16.134 CC module/bdev/raid/bdev_raid.o 00:45:16.134 SO libspdk_bdev_delay.so.6.0 00:45:16.134 SYMLINK libspdk_bdev_lvol.so 00:45:16.134 SYMLINK libspdk_bdev_malloc.so 00:45:16.134 CC module/bdev/raid/bdev_raid_rpc.o 00:45:16.134 CC module/bdev/split/vbdev_split.o 00:45:16.134 SYMLINK libspdk_bdev_delay.so 00:45:16.393 CC module/bdev/raid/bdev_raid_sb.o 00:45:16.393 CC module/bdev/zone_block/vbdev_zone_block.o 00:45:16.393 CC module/bdev/aio/bdev_aio.o 00:45:16.393 CC module/bdev/ftl/bdev_ftl.o 00:45:16.393 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:45:16.393 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:45:16.393 CC module/bdev/split/vbdev_split_rpc.o 00:45:16.393 CC module/bdev/aio/bdev_aio_rpc.o 00:45:16.652 LIB libspdk_bdev_zone_block.a 00:45:16.652 CC module/bdev/ftl/bdev_ftl_rpc.o 00:45:16.652 LIB libspdk_bdev_passthru.a 00:45:16.652 CC module/bdev/nvme/nvme_rpc.o 00:45:16.652 SO libspdk_bdev_zone_block.so.6.0 00:45:16.652 LIB libspdk_bdev_split.a 00:45:16.652 SO libspdk_bdev_passthru.so.6.0 00:45:16.652 LIB libspdk_bdev_aio.a 00:45:16.652 SO libspdk_bdev_split.so.6.0 00:45:16.652 SYMLINK libspdk_bdev_zone_block.so 00:45:16.652 SO libspdk_bdev_aio.so.6.0 00:45:16.652 CC module/bdev/raid/raid0.o 00:45:16.652 SYMLINK libspdk_bdev_passthru.so 00:45:16.652 CC module/bdev/nvme/bdev_mdns_client.o 00:45:16.652 SYMLINK libspdk_bdev_split.so 00:45:16.652 CC module/bdev/iscsi/bdev_iscsi.o 00:45:16.652 CC module/bdev/nvme/vbdev_opal.o 00:45:16.652 SYMLINK libspdk_bdev_aio.so 00:45:16.652 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:45:16.652 LIB libspdk_bdev_ftl.a 00:45:16.910 SO libspdk_bdev_ftl.so.6.0 00:45:16.910 CC module/bdev/nvme/vbdev_opal_rpc.o 00:45:16.910 CC module/bdev/virtio/bdev_virtio_scsi.o 00:45:16.910 SYMLINK libspdk_bdev_ftl.so 00:45:16.910 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:45:16.910 CC module/bdev/raid/raid1.o 00:45:16.910 CC module/bdev/raid/concat.o 00:45:16.910 CC module/bdev/virtio/bdev_virtio_blk.o 00:45:16.910 CC module/bdev/virtio/bdev_virtio_rpc.o 00:45:17.169 LIB libspdk_bdev_iscsi.a 00:45:17.169 SO libspdk_bdev_iscsi.so.6.0 00:45:17.169 SYMLINK libspdk_bdev_iscsi.so 00:45:17.169 LIB libspdk_bdev_raid.a 00:45:17.169 SO libspdk_bdev_raid.so.6.0 00:45:17.427 LIB libspdk_bdev_virtio.a 00:45:17.427 SYMLINK libspdk_bdev_raid.so 00:45:17.427 SO libspdk_bdev_virtio.so.6.0 00:45:17.427 SYMLINK libspdk_bdev_virtio.so 00:45:17.994 LIB libspdk_bdev_nvme.a 00:45:17.994 SO libspdk_bdev_nvme.so.7.0 00:45:17.994 SYMLINK libspdk_bdev_nvme.so 00:45:18.932 CC module/event/subsystems/keyring/keyring.o 00:45:18.932 CC module/event/subsystems/scheduler/scheduler.o 00:45:18.932 CC module/event/subsystems/sock/sock.o 00:45:18.932 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:45:18.932 CC module/event/subsystems/iobuf/iobuf.o 00:45:18.932 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:45:18.932 CC module/event/subsystems/vmd/vmd.o 00:45:18.932 CC module/event/subsystems/vmd/vmd_rpc.o 00:45:18.932 LIB libspdk_event_keyring.a 00:45:18.932 LIB libspdk_event_scheduler.a 00:45:18.932 LIB libspdk_event_sock.a 00:45:18.932 LIB libspdk_event_vhost_blk.a 00:45:18.932 SO libspdk_event_keyring.so.1.0 00:45:18.932 LIB libspdk_event_iobuf.a 00:45:18.932 SO libspdk_event_sock.so.5.0 00:45:18.932 SO libspdk_event_scheduler.so.4.0 
00:45:18.932 LIB libspdk_event_vmd.a 00:45:18.932 SO libspdk_event_vhost_blk.so.3.0 00:45:18.932 SO libspdk_event_iobuf.so.3.0 00:45:18.932 SYMLINK libspdk_event_keyring.so 00:45:18.932 SYMLINK libspdk_event_sock.so 00:45:18.932 SYMLINK libspdk_event_scheduler.so 00:45:18.932 SO libspdk_event_vmd.so.6.0 00:45:18.932 SYMLINK libspdk_event_vhost_blk.so 00:45:18.932 SYMLINK libspdk_event_iobuf.so 00:45:18.932 SYMLINK libspdk_event_vmd.so 00:45:19.224 CC module/event/subsystems/accel/accel.o 00:45:19.529 LIB libspdk_event_accel.a 00:45:19.529 SO libspdk_event_accel.so.6.0 00:45:19.529 SYMLINK libspdk_event_accel.so 00:45:20.094 CC module/event/subsystems/bdev/bdev.o 00:45:20.094 LIB libspdk_event_bdev.a 00:45:20.094 SO libspdk_event_bdev.so.6.0 00:45:20.352 SYMLINK libspdk_event_bdev.so 00:45:20.611 CC module/event/subsystems/nbd/nbd.o 00:45:20.611 CC module/event/subsystems/ublk/ublk.o 00:45:20.611 CC module/event/subsystems/scsi/scsi.o 00:45:20.611 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:45:20.611 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:45:20.869 LIB libspdk_event_nbd.a 00:45:20.869 LIB libspdk_event_ublk.a 00:45:20.869 SO libspdk_event_nbd.so.6.0 00:45:20.869 LIB libspdk_event_scsi.a 00:45:20.869 SO libspdk_event_scsi.so.6.0 00:45:20.869 SO libspdk_event_ublk.so.3.0 00:45:20.869 SYMLINK libspdk_event_nbd.so 00:45:20.869 LIB libspdk_event_nvmf.a 00:45:20.869 SYMLINK libspdk_event_scsi.so 00:45:20.869 SYMLINK libspdk_event_ublk.so 00:45:20.869 SO libspdk_event_nvmf.so.6.0 00:45:20.869 SYMLINK libspdk_event_nvmf.so 00:45:21.436 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:45:21.436 CC module/event/subsystems/iscsi/iscsi.o 00:45:21.436 LIB libspdk_event_vhost_scsi.a 00:45:21.436 LIB libspdk_event_iscsi.a 00:45:21.436 SO libspdk_event_vhost_scsi.so.3.0 00:45:21.436 SO libspdk_event_iscsi.so.6.0 00:45:21.436 SYMLINK libspdk_event_vhost_scsi.so 00:45:21.695 SYMLINK libspdk_event_iscsi.so 00:45:21.695 SO libspdk.so.6.0 00:45:21.695 SYMLINK libspdk.so 00:45:21.953 CC app/spdk_lspci/spdk_lspci.o 00:45:21.953 CC app/trace_record/trace_record.o 00:45:21.953 CXX app/trace/trace.o 00:45:22.211 CC app/iscsi_tgt/iscsi_tgt.o 00:45:22.211 CC app/nvmf_tgt/nvmf_main.o 00:45:22.211 CC app/spdk_tgt/spdk_tgt.o 00:45:22.211 CC examples/util/zipf/zipf.o 00:45:22.211 CC examples/ioat/perf/perf.o 00:45:22.211 CC test/thread/poller_perf/poller_perf.o 00:45:22.211 LINK spdk_lspci 00:45:22.211 LINK iscsi_tgt 00:45:22.211 LINK nvmf_tgt 00:45:22.211 LINK spdk_trace_record 00:45:22.212 LINK zipf 00:45:22.212 LINK poller_perf 00:45:22.212 LINK spdk_tgt 00:45:22.470 LINK ioat_perf 00:45:22.470 LINK spdk_trace 00:45:22.470 TEST_HEADER include/spdk/accel.h 00:45:22.470 TEST_HEADER include/spdk/accel_module.h 00:45:22.470 TEST_HEADER include/spdk/assert.h 00:45:22.470 TEST_HEADER include/spdk/barrier.h 00:45:22.470 TEST_HEADER include/spdk/base64.h 00:45:22.470 TEST_HEADER include/spdk/bdev.h 00:45:22.470 TEST_HEADER include/spdk/bdev_module.h 00:45:22.470 TEST_HEADER include/spdk/bdev_zone.h 00:45:22.470 TEST_HEADER include/spdk/bit_array.h 00:45:22.470 CC test/dma/test_dma/test_dma.o 00:45:22.470 CC examples/interrupt_tgt/interrupt_tgt.o 00:45:22.470 TEST_HEADER include/spdk/bit_pool.h 00:45:22.470 TEST_HEADER include/spdk/blob_bdev.h 00:45:22.470 TEST_HEADER include/spdk/blobfs_bdev.h 00:45:22.470 TEST_HEADER include/spdk/blobfs.h 00:45:22.470 TEST_HEADER include/spdk/blob.h 00:45:22.470 TEST_HEADER include/spdk/conf.h 00:45:22.470 TEST_HEADER include/spdk/config.h 00:45:22.470 TEST_HEADER 
include/spdk/cpuset.h 00:45:22.470 TEST_HEADER include/spdk/crc16.h 00:45:22.470 TEST_HEADER include/spdk/crc32.h 00:45:22.470 TEST_HEADER include/spdk/crc64.h 00:45:22.470 TEST_HEADER include/spdk/dif.h 00:45:22.470 TEST_HEADER include/spdk/dma.h 00:45:22.470 TEST_HEADER include/spdk/endian.h 00:45:22.470 TEST_HEADER include/spdk/env_dpdk.h 00:45:22.470 TEST_HEADER include/spdk/env.h 00:45:22.470 CC examples/ioat/verify/verify.o 00:45:22.470 TEST_HEADER include/spdk/event.h 00:45:22.470 TEST_HEADER include/spdk/fd_group.h 00:45:22.470 TEST_HEADER include/spdk/fd.h 00:45:22.470 TEST_HEADER include/spdk/file.h 00:45:22.470 TEST_HEADER include/spdk/ftl.h 00:45:22.728 TEST_HEADER include/spdk/gpt_spec.h 00:45:22.728 TEST_HEADER include/spdk/hexlify.h 00:45:22.728 TEST_HEADER include/spdk/histogram_data.h 00:45:22.728 TEST_HEADER include/spdk/idxd.h 00:45:22.728 TEST_HEADER include/spdk/idxd_spec.h 00:45:22.728 TEST_HEADER include/spdk/init.h 00:45:22.728 TEST_HEADER include/spdk/ioat.h 00:45:22.728 TEST_HEADER include/spdk/ioat_spec.h 00:45:22.728 TEST_HEADER include/spdk/iscsi_spec.h 00:45:22.728 TEST_HEADER include/spdk/json.h 00:45:22.728 TEST_HEADER include/spdk/jsonrpc.h 00:45:22.728 TEST_HEADER include/spdk/keyring.h 00:45:22.728 TEST_HEADER include/spdk/keyring_module.h 00:45:22.728 TEST_HEADER include/spdk/likely.h 00:45:22.728 TEST_HEADER include/spdk/log.h 00:45:22.728 TEST_HEADER include/spdk/lvol.h 00:45:22.728 TEST_HEADER include/spdk/memory.h 00:45:22.728 CC app/spdk_nvme_perf/perf.o 00:45:22.728 TEST_HEADER include/spdk/mmio.h 00:45:22.728 TEST_HEADER include/spdk/nbd.h 00:45:22.728 CC test/app/bdev_svc/bdev_svc.o 00:45:22.728 TEST_HEADER include/spdk/net.h 00:45:22.728 TEST_HEADER include/spdk/notify.h 00:45:22.728 TEST_HEADER include/spdk/nvme.h 00:45:22.728 TEST_HEADER include/spdk/nvme_intel.h 00:45:22.728 TEST_HEADER include/spdk/nvme_ocssd.h 00:45:22.728 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:45:22.728 TEST_HEADER include/spdk/nvme_spec.h 00:45:22.728 TEST_HEADER include/spdk/nvme_zns.h 00:45:22.728 TEST_HEADER include/spdk/nvmf_cmd.h 00:45:22.728 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:45:22.728 TEST_HEADER include/spdk/nvmf.h 00:45:22.728 TEST_HEADER include/spdk/nvmf_spec.h 00:45:22.729 TEST_HEADER include/spdk/nvmf_transport.h 00:45:22.729 TEST_HEADER include/spdk/opal.h 00:45:22.729 TEST_HEADER include/spdk/opal_spec.h 00:45:22.729 CC test/event/event_perf/event_perf.o 00:45:22.729 TEST_HEADER include/spdk/pci_ids.h 00:45:22.729 TEST_HEADER include/spdk/pipe.h 00:45:22.729 TEST_HEADER include/spdk/queue.h 00:45:22.729 CC test/event/reactor/reactor.o 00:45:22.729 TEST_HEADER include/spdk/reduce.h 00:45:22.729 TEST_HEADER include/spdk/rpc.h 00:45:22.729 TEST_HEADER include/spdk/scheduler.h 00:45:22.729 TEST_HEADER include/spdk/scsi.h 00:45:22.729 TEST_HEADER include/spdk/scsi_spec.h 00:45:22.729 TEST_HEADER include/spdk/sock.h 00:45:22.729 TEST_HEADER include/spdk/stdinc.h 00:45:22.729 TEST_HEADER include/spdk/string.h 00:45:22.729 TEST_HEADER include/spdk/thread.h 00:45:22.729 LINK interrupt_tgt 00:45:22.729 TEST_HEADER include/spdk/trace.h 00:45:22.729 TEST_HEADER include/spdk/trace_parser.h 00:45:22.729 TEST_HEADER include/spdk/tree.h 00:45:22.729 TEST_HEADER include/spdk/ublk.h 00:45:22.729 TEST_HEADER include/spdk/util.h 00:45:22.729 TEST_HEADER include/spdk/uuid.h 00:45:22.729 TEST_HEADER include/spdk/version.h 00:45:22.729 TEST_HEADER include/spdk/vfio_user_pci.h 00:45:22.729 TEST_HEADER include/spdk/vfio_user_spec.h 00:45:22.729 TEST_HEADER 
include/spdk/vhost.h 00:45:22.729 CC test/env/mem_callbacks/mem_callbacks.o 00:45:22.729 TEST_HEADER include/spdk/vmd.h 00:45:22.729 TEST_HEADER include/spdk/xor.h 00:45:22.729 TEST_HEADER include/spdk/zipf.h 00:45:22.729 CXX test/cpp_headers/accel.o 00:45:22.729 LINK verify 00:45:22.729 LINK bdev_svc 00:45:22.729 LINK event_perf 00:45:22.986 LINK reactor 00:45:22.986 LINK test_dma 00:45:22.986 CXX test/cpp_headers/accel_module.o 00:45:22.986 CXX test/cpp_headers/assert.o 00:45:22.986 CXX test/cpp_headers/barrier.o 00:45:22.986 CC test/event/reactor_perf/reactor_perf.o 00:45:23.245 CXX test/cpp_headers/base64.o 00:45:23.245 CC test/env/vtophys/vtophys.o 00:45:23.245 LINK reactor_perf 00:45:23.245 CC test/event/app_repeat/app_repeat.o 00:45:23.245 CC app/spdk_nvme_identify/identify.o 00:45:23.245 CC test/event/scheduler/scheduler.o 00:45:23.245 LINK mem_callbacks 00:45:23.246 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:45:23.246 CXX test/cpp_headers/bdev.o 00:45:23.246 LINK vtophys 00:45:23.504 LINK spdk_nvme_perf 00:45:23.504 LINK app_repeat 00:45:23.504 CC test/rpc_client/rpc_client_test.o 00:45:23.504 LINK scheduler 00:45:23.504 CXX test/cpp_headers/bdev_module.o 00:45:23.504 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:45:23.504 LINK rpc_client_test 00:45:23.504 CC test/env/memory/memory_ut.o 00:45:23.763 CC examples/thread/thread/thread_ex.o 00:45:23.763 LINK nvme_fuzz 00:45:23.763 CXX test/cpp_headers/bdev_zone.o 00:45:23.763 LINK env_dpdk_post_init 00:45:23.763 CC test/accel/dif/dif.o 00:45:23.763 CC test/app/histogram_perf/histogram_perf.o 00:45:23.763 CC test/env/pci/pci_ut.o 00:45:23.763 LINK thread 00:45:24.021 CXX test/cpp_headers/bit_array.o 00:45:24.021 LINK histogram_perf 00:45:24.021 LINK spdk_nvme_identify 00:45:24.021 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:45:24.021 CC app/spdk_nvme_discover/discovery_aer.o 00:45:24.022 CXX test/cpp_headers/bit_pool.o 00:45:24.022 CXX test/cpp_headers/blob_bdev.o 00:45:24.280 CC app/spdk_top/spdk_top.o 00:45:24.280 LINK dif 00:45:24.280 LINK spdk_nvme_discover 00:45:24.280 LINK pci_ut 00:45:24.280 CXX test/cpp_headers/blobfs_bdev.o 00:45:24.280 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:45:24.280 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:45:24.538 CC test/blobfs/mkfs/mkfs.o 00:45:24.538 CXX test/cpp_headers/blobfs.o 00:45:24.538 CC test/app/jsoncat/jsoncat.o 00:45:24.538 LINK memory_ut 00:45:24.538 CC test/app/stub/stub.o 00:45:24.798 LINK jsoncat 00:45:24.798 CXX test/cpp_headers/blob.o 00:45:24.798 LINK mkfs 00:45:24.798 CC examples/sock/hello_world/hello_sock.o 00:45:24.798 CXX test/cpp_headers/conf.o 00:45:24.798 LINK vhost_fuzz 00:45:24.798 LINK stub 00:45:25.057 LINK spdk_top 00:45:25.057 LINK hello_sock 00:45:25.057 CXX test/cpp_headers/config.o 00:45:25.057 CXX test/cpp_headers/cpuset.o 00:45:25.057 CC app/vhost/vhost.o 00:45:25.057 CC app/spdk_dd/spdk_dd.o 00:45:25.057 CC test/lvol/esnap/esnap.o 00:45:25.057 CXX test/cpp_headers/crc16.o 00:45:25.057 CC examples/vmd/lsvmd/lsvmd.o 00:45:25.314 CC app/fio/nvme/fio_plugin.o 00:45:25.314 LINK vhost 00:45:25.314 LINK lsvmd 00:45:25.314 CC app/fio/bdev/fio_plugin.o 00:45:25.314 CC examples/vmd/led/led.o 00:45:25.314 CXX test/cpp_headers/crc32.o 00:45:25.572 LINK spdk_dd 00:45:25.572 LINK led 00:45:25.572 LINK iscsi_fuzz 00:45:25.572 CXX test/cpp_headers/crc64.o 00:45:25.830 CXX test/cpp_headers/dif.o 00:45:25.830 CC test/nvme/aer/aer.o 00:45:25.830 CC examples/idxd/perf/perf.o 00:45:25.830 LINK spdk_bdev 00:45:25.830 LINK spdk_nvme 00:45:25.830 CC 
test/nvme/reset/reset.o 00:45:26.088 CXX test/cpp_headers/dma.o 00:45:26.088 CC test/bdev/bdevio/bdevio.o 00:45:26.088 CC test/nvme/sgl/sgl.o 00:45:26.088 LINK aer 00:45:26.088 LINK idxd_perf 00:45:26.088 CXX test/cpp_headers/endian.o 00:45:26.088 CC test/nvme/e2edp/nvme_dp.o 00:45:26.088 CC test/nvme/overhead/overhead.o 00:45:26.088 LINK reset 00:45:26.346 CXX test/cpp_headers/env_dpdk.o 00:45:26.346 LINK sgl 00:45:26.346 CC test/nvme/err_injection/err_injection.o 00:45:26.346 LINK nvme_dp 00:45:26.346 LINK overhead 00:45:26.346 CXX test/cpp_headers/env.o 00:45:26.605 CC test/nvme/startup/startup.o 00:45:26.605 CC examples/accel/perf/accel_perf.o 00:45:26.605 LINK bdevio 00:45:26.605 CXX test/cpp_headers/event.o 00:45:26.605 LINK err_injection 00:45:26.605 LINK startup 00:45:26.605 CC test/nvme/reserve/reserve.o 00:45:26.862 CXX test/cpp_headers/fd_group.o 00:45:26.862 CC test/nvme/simple_copy/simple_copy.o 00:45:26.862 CC test/nvme/connect_stress/connect_stress.o 00:45:26.862 CXX test/cpp_headers/fd.o 00:45:26.862 CC examples/blob/hello_world/hello_blob.o 00:45:26.862 LINK reserve 00:45:26.862 LINK accel_perf 00:45:26.862 CC examples/nvme/hello_world/hello_world.o 00:45:26.862 LINK simple_copy 00:45:26.862 CC examples/nvme/reconnect/reconnect.o 00:45:26.862 LINK connect_stress 00:45:27.120 CXX test/cpp_headers/file.o 00:45:27.120 CXX test/cpp_headers/ftl.o 00:45:27.120 CXX test/cpp_headers/gpt_spec.o 00:45:27.120 LINK hello_blob 00:45:27.120 CXX test/cpp_headers/hexlify.o 00:45:27.120 LINK hello_world 00:45:27.120 CC test/nvme/boot_partition/boot_partition.o 00:45:27.120 CC test/nvme/compliance/nvme_compliance.o 00:45:27.379 CXX test/cpp_headers/histogram_data.o 00:45:27.379 LINK reconnect 00:45:27.379 CXX test/cpp_headers/idxd.o 00:45:27.379 CXX test/cpp_headers/idxd_spec.o 00:45:27.379 LINK boot_partition 00:45:27.379 CC examples/blob/cli/blobcli.o 00:45:27.379 CC examples/bdev/hello_world/hello_bdev.o 00:45:27.379 CXX test/cpp_headers/init.o 00:45:27.379 CXX test/cpp_headers/ioat.o 00:45:27.379 CXX test/cpp_headers/ioat_spec.o 00:45:27.637 CC examples/nvme/nvme_manage/nvme_manage.o 00:45:27.638 LINK nvme_compliance 00:45:27.638 CC examples/bdev/bdevperf/bdevperf.o 00:45:27.638 CXX test/cpp_headers/iscsi_spec.o 00:45:27.638 LINK hello_bdev 00:45:27.638 CC examples/nvme/arbitration/arbitration.o 00:45:27.897 LINK blobcli 00:45:27.897 CC examples/nvme/hotplug/hotplug.o 00:45:27.897 CXX test/cpp_headers/json.o 00:45:27.897 CC test/nvme/fused_ordering/fused_ordering.o 00:45:27.897 CXX test/cpp_headers/jsonrpc.o 00:45:27.897 LINK nvme_manage 00:45:27.897 LINK hotplug 00:45:28.156 LINK fused_ordering 00:45:28.156 CC examples/nvme/cmb_copy/cmb_copy.o 00:45:28.156 LINK arbitration 00:45:28.156 CC examples/nvme/abort/abort.o 00:45:28.156 CXX test/cpp_headers/keyring.o 00:45:28.156 CC test/nvme/doorbell_aers/doorbell_aers.o 00:45:28.156 CXX test/cpp_headers/keyring_module.o 00:45:28.156 CC test/nvme/fdp/fdp.o 00:45:28.156 LINK cmb_copy 00:45:28.156 CXX test/cpp_headers/likely.o 00:45:28.156 LINK bdevperf 00:45:28.156 CC test/nvme/cuse/cuse.o 00:45:28.415 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:45:28.415 CXX test/cpp_headers/log.o 00:45:28.415 LINK doorbell_aers 00:45:28.415 CXX test/cpp_headers/lvol.o 00:45:28.415 CXX test/cpp_headers/memory.o 00:45:28.415 LINK abort 00:45:28.415 CXX test/cpp_headers/mmio.o 00:45:28.415 LINK fdp 00:45:28.415 LINK pmr_persistence 00:45:28.415 CXX test/cpp_headers/nbd.o 00:45:28.415 CXX test/cpp_headers/net.o 00:45:28.415 CXX 
test/cpp_headers/notify.o 00:45:28.415 CXX test/cpp_headers/nvme.o 00:45:28.415 CXX test/cpp_headers/nvme_intel.o 00:45:28.675 CXX test/cpp_headers/nvme_ocssd.o 00:45:28.675 CXX test/cpp_headers/nvme_ocssd_spec.o 00:45:28.675 CXX test/cpp_headers/nvme_spec.o 00:45:28.675 CXX test/cpp_headers/nvme_zns.o 00:45:28.675 CXX test/cpp_headers/nvmf_cmd.o 00:45:28.675 CXX test/cpp_headers/nvmf_fc_spec.o 00:45:28.675 CXX test/cpp_headers/nvmf.o 00:45:28.675 CXX test/cpp_headers/nvmf_spec.o 00:45:28.675 CXX test/cpp_headers/nvmf_transport.o 00:45:28.675 CXX test/cpp_headers/opal.o 00:45:28.934 CXX test/cpp_headers/opal_spec.o 00:45:28.934 CC examples/nvmf/nvmf/nvmf.o 00:45:28.934 CXX test/cpp_headers/pci_ids.o 00:45:28.934 CXX test/cpp_headers/pipe.o 00:45:28.934 CXX test/cpp_headers/queue.o 00:45:28.934 CXX test/cpp_headers/reduce.o 00:45:28.934 CXX test/cpp_headers/rpc.o 00:45:28.934 CXX test/cpp_headers/scheduler.o 00:45:28.934 CXX test/cpp_headers/scsi.o 00:45:28.934 CXX test/cpp_headers/scsi_spec.o 00:45:28.934 CXX test/cpp_headers/sock.o 00:45:28.934 CXX test/cpp_headers/stdinc.o 00:45:28.934 CXX test/cpp_headers/string.o 00:45:28.934 CXX test/cpp_headers/thread.o 00:45:29.193 LINK nvmf 00:45:29.193 CXX test/cpp_headers/trace.o 00:45:29.193 CXX test/cpp_headers/trace_parser.o 00:45:29.193 CXX test/cpp_headers/tree.o 00:45:29.193 CXX test/cpp_headers/ublk.o 00:45:29.193 CXX test/cpp_headers/util.o 00:45:29.193 CXX test/cpp_headers/uuid.o 00:45:29.193 CXX test/cpp_headers/version.o 00:45:29.193 CXX test/cpp_headers/vfio_user_pci.o 00:45:29.193 CXX test/cpp_headers/vfio_user_spec.o 00:45:29.193 CXX test/cpp_headers/vhost.o 00:45:29.193 CXX test/cpp_headers/vmd.o 00:45:29.193 CXX test/cpp_headers/xor.o 00:45:29.193 CXX test/cpp_headers/zipf.o 00:45:29.193 LINK cuse 00:45:29.454 LINK esnap 00:45:29.713 00:45:29.713 real 0m49.423s 00:45:29.713 user 4m15.277s 00:45:29.713 sys 1m11.630s 00:45:29.713 10:40:37 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:45:29.713 10:40:37 make -- common/autotest_common.sh@10 -- $ set +x 00:45:29.713 ************************************ 00:45:29.713 END TEST make 00:45:29.713 ************************************ 00:45:29.974 10:40:37 -- common/autotest_common.sh@1142 -- $ return 0 00:45:29.974 10:40:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:45:29.974 10:40:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:45:29.974 10:40:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:45:29.974 10:40:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:45:29.974 10:40:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:45:29.974 10:40:37 -- pm/common@44 -- $ pid=5874 00:45:29.974 10:40:37 -- pm/common@50 -- $ kill -TERM 5874 00:45:29.974 10:40:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:45:29.974 10:40:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:45:29.974 10:40:37 -- pm/common@44 -- $ pid=5876 00:45:29.974 10:40:37 -- pm/common@50 -- $ kill -TERM 5876 00:45:29.974 10:40:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:29.974 10:40:37 -- nvmf/common.sh@7 -- # uname -s 00:45:29.974 10:40:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:29.974 10:40:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:29.974 10:40:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:29.974 10:40:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:29.974 
10:40:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:29.974 10:40:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:29.974 10:40:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:29.974 10:40:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:29.974 10:40:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:29.974 10:40:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:29.974 10:40:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:45:29.974 10:40:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:45:29.974 10:40:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:29.974 10:40:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:29.974 10:40:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:29.974 10:40:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:29.974 10:40:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:29.974 10:40:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:29.974 10:40:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:29.974 10:40:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:29.974 10:40:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:29.974 10:40:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:29.974 10:40:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:29.974 10:40:37 -- paths/export.sh@5 -- # export PATH 00:45:29.974 10:40:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:29.974 10:40:37 -- nvmf/common.sh@47 -- # : 0 00:45:29.974 10:40:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:29.974 10:40:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:29.974 10:40:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:29.974 10:40:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:29.974 10:40:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:29.974 10:40:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:29.974 10:40:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:29.974 10:40:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:29.974 10:40:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:45:29.974 10:40:37 -- spdk/autotest.sh@32 -- # uname -s 00:45:29.974 10:40:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:45:29.974 10:40:37 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:45:29.974 10:40:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:45:29.974 10:40:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:45:29.974 10:40:37 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:45:29.974 10:40:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:45:29.974 10:40:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:45:29.974 10:40:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:45:30.234 10:40:37 -- spdk/autotest.sh@48 -- # udevadm_pid=67766 00:45:30.234 10:40:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:45:30.234 10:40:37 -- pm/common@17 -- # local monitor 00:45:30.234 10:40:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:45:30.234 10:40:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:45:30.234 10:40:37 -- pm/common@25 -- # sleep 1 00:45:30.234 10:40:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:45:30.234 10:40:37 -- pm/common@21 -- # date +%s 00:45:30.234 10:40:37 -- pm/common@21 -- # date +%s 00:45:30.234 10:40:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721644837 00:45:30.234 10:40:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721644837 00:45:30.234 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721644837_collect-cpu-load.pm.log 00:45:30.234 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721644837_collect-vmstat.pm.log 00:45:31.173 10:40:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:45:31.173 10:40:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:45:31.173 10:40:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:45:31.173 10:40:38 -- common/autotest_common.sh@10 -- # set +x 00:45:31.173 10:40:38 -- spdk/autotest.sh@59 -- # create_test_list 00:45:31.173 10:40:38 -- common/autotest_common.sh@746 -- # xtrace_disable 00:45:31.173 10:40:38 -- common/autotest_common.sh@10 -- # set +x 00:45:31.173 10:40:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:45:31.173 10:40:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:45:31.173 10:40:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:45:31.173 10:40:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:45:31.173 10:40:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:45:31.173 10:40:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:45:31.173 10:40:38 -- common/autotest_common.sh@1455 -- # uname 00:45:31.173 10:40:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:45:31.173 10:40:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:45:31.173 10:40:38 -- common/autotest_common.sh@1475 -- # uname 00:45:31.173 10:40:39 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:45:31.173 10:40:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:45:31.173 10:40:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:45:31.173 10:40:39 -- spdk/autotest.sh@72 -- # hash lcov 00:45:31.173 10:40:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc 
== *\c\l\a\n\g* ]] 00:45:31.173 10:40:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:45:31.173 --rc lcov_branch_coverage=1 00:45:31.173 --rc lcov_function_coverage=1 00:45:31.173 --rc genhtml_branch_coverage=1 00:45:31.173 --rc genhtml_function_coverage=1 00:45:31.173 --rc genhtml_legend=1 00:45:31.173 --rc geninfo_all_blocks=1 00:45:31.173 ' 00:45:31.173 10:40:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:45:31.173 --rc lcov_branch_coverage=1 00:45:31.173 --rc lcov_function_coverage=1 00:45:31.173 --rc genhtml_branch_coverage=1 00:45:31.173 --rc genhtml_function_coverage=1 00:45:31.173 --rc genhtml_legend=1 00:45:31.173 --rc geninfo_all_blocks=1 00:45:31.173 ' 00:45:31.173 10:40:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:45:31.173 --rc lcov_branch_coverage=1 00:45:31.173 --rc lcov_function_coverage=1 00:45:31.173 --rc genhtml_branch_coverage=1 00:45:31.173 --rc genhtml_function_coverage=1 00:45:31.173 --rc genhtml_legend=1 00:45:31.173 --rc geninfo_all_blocks=1 00:45:31.173 --no-external' 00:45:31.173 10:40:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:45:31.173 --rc lcov_branch_coverage=1 00:45:31.173 --rc lcov_function_coverage=1 00:45:31.173 --rc genhtml_branch_coverage=1 00:45:31.173 --rc genhtml_function_coverage=1 00:45:31.173 --rc genhtml_legend=1 00:45:31.173 --rc geninfo_all_blocks=1 00:45:31.173 --no-external' 00:45:31.173 10:40:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:45:31.173 lcov: LCOV version 1.14 00:45:31.173 10:40:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:45:46.287 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:45:46.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no 
functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no 
functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:45:56.272 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:45:56.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:45:56.530 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:45:56.530 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:45:56.530 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:45:56.789 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:45:56.789 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:46:00.130 10:41:07 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:46:00.130 10:41:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:46:00.130 10:41:07 -- common/autotest_common.sh@10 -- # set +x 00:46:00.130 10:41:07 -- spdk/autotest.sh@91 -- # rm -f 00:46:00.130 10:41:07 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:00.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:00.647 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:46:00.647 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:46:00.647 10:41:08 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:46:00.647 10:41:08 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:46:00.647 10:41:08 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:46:00.647 10:41:08 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:46:00.647 10:41:08 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:00.647 10:41:08 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:46:00.647 10:41:08 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:46:00.647 10:41:08 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:00.647 10:41:08 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:00.647 10:41:08 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:00.647 10:41:08 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:46:00.647 10:41:08 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:46:00.647 10:41:08 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:46:00.647 10:41:08 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:00.647 10:41:08 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:00.647 10:41:08 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:46:00.647 10:41:08 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:46:00.647 10:41:08 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:46:00.647 10:41:08 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:00.647 10:41:08 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:00.647 10:41:08 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:46:00.647 10:41:08 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:46:00.647 10:41:08 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:46:00.647 10:41:08 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:00.647 10:41:08 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:46:00.647 10:41:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:46:00.647 10:41:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:46:00.647 10:41:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:46:00.647 10:41:08 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:46:00.647 10:41:08 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:46:00.647 No valid GPT data, bailing 00:46:00.647 10:41:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:00.647 10:41:08 -- scripts/common.sh@391 -- # pt= 00:46:00.647 10:41:08 -- scripts/common.sh@392 -- # return 1 00:46:00.647 10:41:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M 
count=1 00:46:00.647 1+0 records in 00:46:00.647 1+0 records out 00:46:00.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416651 s, 252 MB/s 00:46:00.647 10:41:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:46:00.647 10:41:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:46:00.647 10:41:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:46:00.647 10:41:08 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:46:00.647 10:41:08 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:46:00.647 No valid GPT data, bailing 00:46:00.647 10:41:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:46:00.647 10:41:08 -- scripts/common.sh@391 -- # pt= 00:46:00.647 10:41:08 -- scripts/common.sh@392 -- # return 1 00:46:00.647 10:41:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:46:00.647 1+0 records in 00:46:00.647 1+0 records out 00:46:00.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593614 s, 177 MB/s 00:46:00.647 10:41:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:46:00.647 10:41:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:46:00.647 10:41:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:46:00.647 10:41:08 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:46:00.647 10:41:08 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:46:00.906 No valid GPT data, bailing 00:46:00.906 10:41:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:46:00.906 10:41:08 -- scripts/common.sh@391 -- # pt= 00:46:00.906 10:41:08 -- scripts/common.sh@392 -- # return 1 00:46:00.906 10:41:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:46:00.906 1+0 records in 00:46:00.906 1+0 records out 00:46:00.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389805 s, 269 MB/s 00:46:00.906 10:41:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:46:00.906 10:41:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:46:00.906 10:41:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:46:00.906 10:41:08 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:46:00.906 10:41:08 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:46:00.906 No valid GPT data, bailing 00:46:00.906 10:41:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:46:00.906 10:41:08 -- scripts/common.sh@391 -- # pt= 00:46:00.906 10:41:08 -- scripts/common.sh@392 -- # return 1 00:46:00.906 10:41:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:46:00.906 1+0 records in 00:46:00.906 1+0 records out 00:46:00.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459918 s, 228 MB/s 00:46:00.906 10:41:08 -- spdk/autotest.sh@118 -- # sync 00:46:01.164 10:41:09 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:46:01.164 10:41:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:46:01.164 10:41:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:46:04.449 10:41:11 -- spdk/autotest.sh@124 -- # uname -s 00:46:04.449 10:41:11 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:46:04.449 10:41:11 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:46:04.449 10:41:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:04.449 10:41:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 
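The pre-cleanup pass traced just above reduces to a short shell loop: skip zoned namespaces, probe each whole NVMe namespace for a partition table, and zero the first 1 MiB of anything unpartitioned before syncing. The sketch below is a hedged consolidation of those traced commands, not the verbatim autotest.sh code; the function name is illustrative only.

wipe_unpartitioned_nvme() {
    # Destructive: intended for the disposable test VM only, run as root.
    local dev name zoned
    for dev in /dev/nvme*n*; do
        name=${dev##*/}
        [[ "$name" == *p* ]] && continue                  # skip partitions (e.g. nvme0n1p1)
        zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
        [[ "$zoned" != none ]] && continue                # leave zoned devices untouched
        if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
            # corresponds to the "No valid GPT data, bailing" path in the log
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync
}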
00:46:04.449 10:41:11 -- common/autotest_common.sh@10 -- # set +x 00:46:04.449 ************************************ 00:46:04.449 START TEST setup.sh 00:46:04.449 ************************************ 00:46:04.449 10:41:11 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:46:04.449 * Looking for test storage... 00:46:04.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:46:04.449 10:41:11 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:46:04.449 10:41:11 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:46:04.449 10:41:11 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:46:04.449 10:41:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:04.449 10:41:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:04.449 10:41:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:46:04.449 ************************************ 00:46:04.449 START TEST acl 00:46:04.449 ************************************ 00:46:04.449 10:41:11 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:46:04.449 * Looking for test storage... 00:46:04.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:46:04.449 10:41:12 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:04.449 10:41:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:46:04.450 10:41:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:46:04.450 10:41:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # 
[[ -e /sys/block/nvme1n3/queue/zoned ]] 00:46:04.450 10:41:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:04.450 10:41:12 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:46:04.450 10:41:12 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:46:04.450 10:41:12 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:46:04.450 10:41:12 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:46:04.450 10:41:12 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:46:04.450 10:41:12 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:46:04.450 10:41:12 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:05.383 10:41:12 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:46:05.383 10:41:12 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:46:05.383 10:41:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:46:05.383 10:41:12 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:46:05.383 10:41:12 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:46:05.383 10:41:12 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:46:05.949 10:41:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:46:05.949 10:41:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:46:05.949 10:41:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:46:05.949 Hugepages 00:46:05.949 node hugesize free / total 00:46:05.949 10:41:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:46:05.949 10:41:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:46:05.949 10:41:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:46:05.949 00:46:05.949 Type BDF Vendor Device NUMA Driver Device Block devices 00:46:05.949 10:41:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:46:05.949 10:41:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:46:05.949 10:41:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:46:06.207 10:41:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:46:06.207 10:41:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:46:06.207 10:41:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:46:06.207 10:41:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:46:06.207 10:41:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:46:06.465 10:41:14 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:46:06.465 10:41:14 setup.sh.acl -- setup/acl.sh@54 -- # 
run_test denied denied 00:46:06.465 10:41:14 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:06.465 10:41:14 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:06.465 10:41:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:46:06.465 ************************************ 00:46:06.465 START TEST denied 00:46:06.465 ************************************ 00:46:06.465 10:41:14 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:46:06.465 10:41:14 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:46:06.465 10:41:14 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:46:06.465 10:41:14 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:46:06.465 10:41:14 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:46:06.465 10:41:14 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:46:07.399 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:46:07.399 10:41:15 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:46:07.399 10:41:15 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:46:07.399 10:41:15 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:46:07.399 10:41:15 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:46:07.399 10:41:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:46:07.399 10:41:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:46:07.399 10:41:15 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:46:07.399 10:41:15 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:46:07.399 10:41:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:46:07.399 10:41:15 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:08.334 ************************************ 00:46:08.334 END TEST denied 00:46:08.334 ************************************ 00:46:08.334 00:46:08.334 real 0m1.811s 00:46:08.334 user 0m0.665s 00:46:08.334 sys 0m1.144s 00:46:08.334 10:41:15 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:08.334 10:41:15 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:46:08.334 10:41:16 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:46:08.334 10:41:16 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:46:08.334 10:41:16 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:08.334 10:41:16 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:08.334 10:41:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:46:08.334 ************************************ 00:46:08.334 START TEST allowed 00:46:08.334 ************************************ 00:46:08.334 10:41:16 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:46:08.334 10:41:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:46:08.334 10:41:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:46:08.334 10:41:16 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:46:08.334 10:41:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:46:08.334 10:41:16 setup.sh.acl.allowed -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:46:09.281 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:09.281 10:41:17 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:46:09.281 10:41:17 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:46:09.281 10:41:17 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:46:09.281 10:41:17 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:46:09.281 10:41:17 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:46:09.281 10:41:17 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:46:09.281 10:41:17 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:46:09.281 10:41:17 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:46:09.281 10:41:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:46:09.281 10:41:17 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:10.216 00:46:10.216 real 0m1.922s 00:46:10.216 user 0m0.756s 00:46:10.216 sys 0m1.185s 00:46:10.216 10:41:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:10.216 10:41:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:46:10.216 ************************************ 00:46:10.216 END TEST allowed 00:46:10.216 ************************************ 00:46:10.216 10:41:18 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:46:10.216 ************************************ 00:46:10.216 END TEST acl 00:46:10.216 ************************************ 00:46:10.216 00:46:10.216 real 0m6.099s 00:46:10.216 user 0m2.343s 00:46:10.216 sys 0m3.793s 00:46:10.216 10:41:18 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:10.216 10:41:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:46:10.216 10:41:18 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:46:10.216 10:41:18 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:46:10.216 10:41:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:10.216 10:41:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:10.216 10:41:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:46:10.216 ************************************ 00:46:10.216 START TEST hugepages 00:46:10.216 ************************************ 00:46:10.216 10:41:18 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:46:10.475 * Looking for test storage... 
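The denied/allowed ACL tests that finished above pass or fail based on which kernel driver a controller's BDF ends up bound to after setup.sh reset/config runs with PCI_BLOCKED or PCI_ALLOWED set. A hedged sketch of that binding check follows; the helper name is illustrative, and the BDF in the usage line is simply the one appearing in the log.

pci_bound_to() {
    # True if the PCI device at $1 is currently bound to the driver named $2.
    local bdf=$1 expected=$2 driver
    [[ -e "/sys/bus/pci/devices/$bdf/driver" ]] || return 1   # not bound to any driver
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    [[ "${driver##*/}" == "$expected" ]]
}

# e.g. the denied test expects the blocked controller to stay on the kernel nvme driver:
pci_bound_to 0000:00:10.0 nvme && echo "0000:00:10.0 still bound to nvme"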
00:46:10.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4439248 kB' 'MemAvailable: 7380136 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 483388 kB' 'Inactive: 2771528 kB' 'Active(anon): 121180 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 112792 kB' 'Mapped: 48776 kB' 'Shmem: 10492 kB' 'KReclaimable: 88628 kB' 'Slab: 171372 kB' 'SReclaimable: 88628 kB' 'SUnreclaim: 82744 kB' 'KernelStack: 6476 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 345056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.475 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
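The key-by-key xtrace above is setup/common.sh's get_meminfo scanning every /proc/meminfo field with IFS=': ' until it reaches the one requested (Hugepagesize here). A minimal sketch of that kind of lookup, under the assumption of a plain /proc/meminfo read; the helper name below is illustrative, not the script's own code:

  #!/usr/bin/env bash
  # Return the value column of one /proc/meminfo field,
  # e.g. `meminfo_get Hugepagesize` -> 2048 on this runner.
  meminfo_get() {
      local get=$1 var val _
      local mem_f=/proc/meminfo
      # The traced script can also read a per-node file
      # (/sys/devices/system/node/node<N>/meminfo) and strips the
      # leading "Node <N> " prefix before parsing; omitted here.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }

On this VM the lookup resolves to 2048 (kB), which hugepages.sh records as default_hugepages in the entries that follow.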
00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:46:10.476 10:41:18 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:46:10.476 10:41:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:10.476 10:41:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:10.476 10:41:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:46:10.476 ************************************ 00:46:10.476 START TEST default_setup 00:46:10.476 ************************************ 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:46:10.476 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:46:10.477 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:46:10.477 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:46:10.477 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:46:10.477 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:46:10.477 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:46:10.477 10:41:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:46:10.477 10:41:18 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:46:10.477 10:41:18 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:11.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:11.413 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:11.413 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:11.413 10:41:19 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6464872 kB' 'MemAvailable: 9405604 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 494872 kB' 'Inactive: 2771544 kB' 'Active(anon): 132664 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123768 kB' 'Mapped: 48804 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171096 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82808 kB' 'KernelStack: 6464 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
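This second walk of /proc/meminfo belongs to verify_nr_hugepages; just before it, default_setup sized the pool it is about to check: with Hugepagesize at 2048 kB, the 2097152 kB request maps to 1024 hugepages pinned to node 0, after clear_hp first zeroes the per-node nr_hugepages counters. A rough sketch of the arithmetic the trace implies, with illustrative variable names and kB units assumed throughout (the echo only describes the writes, it does not perform them):

  default_hugepages=2048                          # kB, from the Hugepagesize lookup above
  size=2097152                                    # kB requested by default_setup
  node_ids=(0)                                    # the test pins the pool to node 0
  nr_hugepages=$(( size / default_hugepages ))    # -> 1024 pages
  for node in "${node_ids[@]}"; do
      echo "would write 0, then $nr_hugepages, to /sys/devices/system/node/node$node/hugepages/hugepages-${default_hugepages}kB/nr_hugepages"
  done

The later meminfo dump in this pass reports HugePages_Total: 1024, which is the count verify_nr_hugepages is reconciling against that request.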
00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.413 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.414 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.414 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.414 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.414 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
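The field being extracted in this pass is AnonHugePages: because the transparent-hugepage mode reads "always [madvise] never" rather than "[never]", verify_nr_hugepages records how much anonymous THP memory is in play (0 kB on this runner) before it samples HugePages_Surp and tallies the pool. A hedged one-liner for the same probe; the sysfs path is assumed to be the standard transparent_hugepage location rather than quoted from the script, and the variable names are illustrative:

  thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)      # e.g. "always [madvise] never"
  if [[ $thp_mode != *'[never]'* ]]; then
      anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 kB here
  fi

With no anonymous memory THP-backed, only the explicit 2048 kB pool (and any surplus pages) remains for the accounting that the trace continues with below.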
00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.684 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6464936 kB' 'MemAvailable: 9405668 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 494696 kB' 'Inactive: 2771544 kB' 'Active(anon): 132488 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123608 kB' 'Mapped: 48804 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171096 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82808 kB' 'KernelStack: 6464 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 
'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.685 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.685 10:41:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (xtrace condensed: the IFS=': ' / read -r var val _ / continue cycle repeats here for every remaining /proc/meminfo field, Zswapped through HugePages_Rsvd, none of which matches the requested key) 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.687
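The loop traced above is the setup/common.sh get_meminfo helper scanning /proc/meminfo one line at a time until it reaches the requested field. A minimal sketch of that parsing pattern, reconstructed from the trace rather than copied from setup/common.sh (the name get_meminfo_sketch and the simplified return handling are assumptions), looks like this:

# Split each /proc/meminfo line on ': ', skip every field that does not match
# the requested key, and print the matching value (the trailing "kB" unit, when
# present, lands in the discarded third field).
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
# Example: mirrors the "echo 0" result seen in the trace for HugePages_Surp.
surp=$(get_meminfo_sketch HugePages_Surp)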
10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6465204 kB' 'MemAvailable: 9405936 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 494836 kB' 'Inactive: 2771544 kB' 'Active(anon): 132628 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123752 kB' 'Mapped: 48804 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171084 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82796 kB' 'KernelStack: 6448 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:46:11.687 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ (xtrace condensed: the same read/continue cycle repeats for every /proc/meminfo field from MemAvailable through FilePmdMapped, none of which matches HugePages_Rsvd) 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:46:11.689 nr_hugepages=1024 00:46:11.689 resv_hugepages=0 00:46:11.689 surplus_hugepages=0 00:46:11.689 anon_hugepages=0 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:46:11.689 10:41:19 
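At this point the script has collected nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and checks that the kernel's hugepage accounting adds up before re-reading HugePages_Total. A hedged sketch of that consistency check (variable names follow the trace; the awk one-liners are assumptions standing in for the traced get_meminfo calls):

# HugePages_Total reported by the kernel should equal the requested page count
# plus any surplus and reserved pages; the trace verifies this equality before
# walking the per-node counts.
nr_hugepages=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total"
else
    echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
fi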
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6465460 kB' 'MemAvailable: 9406192 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 494832 kB' 'Inactive: 2771544 kB' 'Active(anon): 132624 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123732 kB' 'Mapped: 48804 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171084 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82796 kB' 'KernelStack: 6448 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:11.689 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.689 10:41:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ (xtrace condensed: the read/continue cycle repeats for every /proc/meminfo field from Buffers through FilePmdMapped, none of which matches HugePages_Total) 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.691 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:11.692 10:41:19 
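The trace now moves from the global counters to per-NUMA-node bookkeeping: get_nodes enumerates /sys/devices/system/node/node+([0-9]) (one node on this VM), and get_meminfo is called again with node=0 so that mem_f switches to /sys/devices/system/node/node0/meminfo. A rough sketch of that per-node walk, simplified from the traced logic (a plain glob instead of the extglob pattern, awk instead of the traced read loop, and the node_hugepages array name is an assumption):

# Per-node meminfo lines are prefixed with "Node <N>", so the key is field 3 and
# the value field 4 (per-node hugepage counters carry no "kB" suffix).
declare -A node_hugepages
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue          # guard against an unmatched glob
    node=${node_dir##*node}
    node_hugepages[$node]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
done
echo "NUMA nodes found: ${#node_hugepages[@]}"           # 1 in this run
echo "node0 HugePages_Total: ${node_hugepages[0]:-n/a}"  # 1024 in this run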
setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6465460 kB' 'MemUsed: 5776512 kB' 'SwapCached: 0 kB' 'Active: 494836 kB' 'Inactive: 2771544 kB' 'Active(anon): 132628 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 3144224 kB' 'Mapped: 48804 kB' 'AnonPages: 123728 kB' 'Shmem: 10468 kB' 'KernelStack: 6448 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88288 kB' 'Slab: 171080 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.692 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:46:11.693 node0=1024 expecting 1024 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:46:11.693 00:46:11.693 real 0m1.221s 00:46:11.693 user 0m0.535s 00:46:11.693 sys 0m0.633s 00:46:11.693 ************************************ 00:46:11.693 END TEST default_setup 00:46:11.693 ************************************ 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:11.693 10:41:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:46:11.693 10:41:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:46:11.693 10:41:19 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:46:11.693 10:41:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
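The xtrace block above is setup/common.sh's get_meminfo walking a captured meminfo snapshot field by field until it reaches HugePages_Surp, then echoing that value (0 on this runner) back to the hugepages accounting that prints 'node0=1024 expecting 1024'. A minimal standalone sketch of the same lookup, reading /proc/meminfo directly (the function name and the direct read are illustrative assumptions, not SPDK's exact implementation):

    #!/usr/bin/env bash
    # Split each meminfo line on ': ' into key and value, exactly as the
    # repeated IFS=': ' / read -r var val _ pairs in the trace do, and print
    # the value for the requested key.
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field HugePages_Surp   # expected to print 0 here, matching the 'echo 0' in the trace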
00:46:11.693 10:41:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:11.693 10:41:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:46:11.693 ************************************ 00:46:11.693 START TEST per_node_1G_alloc 00:46:11.693 ************************************ 00:46:11.693 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:46:11.694 10:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:12.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:12.265 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:12.265 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:46:12.265 
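In the get_test_nr_hugepages trace above, per_node_1G_alloc turns its arguments (1048576 0) into a per-node request: a 1048576 kB (1 GiB) target divided by the 2048 kB default hugepage size gives nr_hugepages=512 on node 0, exported as NRHUGE=512 HUGENODE=0 before scripts/setup.sh is re-run. Outside the SPDK helper, the same per-node reservation can be made through the kernel's sysfs knob; a hedged sketch assuming a 2 MiB default hugepage size and an existing node0 (setup.sh is presumed, though not shown in this log, to drive the same interface when HUGENODE is set):

    # Reserve 512 x 2048 kB hugepages (1 GiB total) on NUMA node 0.
    echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

    # Check the resulting counters; the verify step reads the same fields from meminfo.
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo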
10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7511520 kB' 'MemAvailable: 10452256 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 495108 kB' 'Inactive: 2771548 kB' 'Active(anon): 132900 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 124012 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171192 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82904 kB' 'KernelStack: 6436 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.265 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.266 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7511520 kB' 'MemAvailable: 10452256 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 494796 kB' 'Inactive: 2771548 kB' 'Active(anon): 132588 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123740 kB' 'Mapped: 48804 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171188 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82900 kB' 'KernelStack: 6448 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.267 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.268 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7511520 kB' 'MemAvailable: 10452256 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 495056 kB' 'Inactive: 2771548 kB' 'Active(anon): 132848 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 124000 kB' 'Mapped: 48804 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171188 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82900 kB' 'KernelStack: 6448 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.532 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.533 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.533 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:46:12.534 nr_hugepages=512 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:46:12.534 resv_hugepages=0 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:46:12.534 surplus_hugepages=0 00:46:12.534 anon_hugepages=0 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
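The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo field by field (the repeated [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pairs) while setup/hugepages.sh derives surp and resv from HugePages_Surp and HugePages_Rsvd before checking nr_hugepages. A minimal sketch of the same lookup, assuming illustrative names (get_meminfo_sketch is not the exact SPDK helper):

    # Sketch only: look up one field from /proc/meminfo, or from a node's
    # meminfo file when a NUMA node number is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node <n> "; strip that,
        # then scan line by line until the requested field matches.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # numeric value only (units column is discarded)
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # e.g., mirroring the values echoed in this run:
    surp=$(get_meminfo_sketch HugePages_Surp)   # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0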
00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7511520 kB' 'MemAvailable: 10452256 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 494976 kB' 'Inactive: 2771548 kB' 'Active(anon): 132768 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123912 kB' 'Mapped: 48804 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171184 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82896 kB' 'KernelStack: 6464 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 
10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.534 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.535 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 
)) 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7520868 kB' 'MemUsed: 4721104 kB' 'SwapCached: 0 kB' 'Active: 494700 kB' 'Inactive: 2771548 kB' 'Active(anon): 132492 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 3144224 kB' 'Mapped: 48804 kB' 'AnonPages: 123648 kB' 'Shmem: 10468 kB' 'KernelStack: 6464 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88288 kB' 'Slab: 171180 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
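At this point the trace has switched to the per-node branch: get_nodes enumerates /sys/devices/system/node/node[0-9]*, records the expected count (512 in this run), and get_meminfo is re-invoked with node=0 so it reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A hedged sketch of that verification step, reusing the illustrative get_meminfo_sketch from above (the 512 mirrors this particular run, not a fixed constant):

    # Sketch only: per-node hugepage check.
    declare -A nodes_sketch=()
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        nodes_sketch[$node]=512                     # expected hugepages per node
    done
    for node in "${!nodes_sketch[@]}"; do
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        free=$(get_meminfo_sketch HugePages_Free "$node")
        echo "node$node: expected=${nodes_sketch[$node]} free=$free surp=$surp"
    done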
00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 
10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.536 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
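Note: the trace above is setup/common.sh's get_meminfo scanning every field of the node-0 meminfo file until it reaches the requested HugePages_Surp key. For reference, a minimal standalone sketch of that lookup technique follows; it is a simplified re-implementation inferred from the traced commands (mapfile, the "Node <id> " prefix strip, the IFS=': ' read loop), not the exact SPDK helper.

#!/usr/bin/env bash
# Sketch of the per-node meminfo lookup seen in the trace above.
# Assumption: simplified re-implementation, not the real setup/common.sh source.
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _

    # Prefer the per-node file when a node id is given and the file exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"

    # Per-node files prefix every line with "Node <id> "; strip that so the
    # remaining "Key: value kB" lines parse the same way as /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp 0   # e.g. prints 0 for node 0 in the run above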
00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:46:12.537 node0=512 expecting 512 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:46:12.537 00:46:12.537 real 0m0.730s 00:46:12.537 user 0m0.313s 00:46:12.537 sys 0m0.448s 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:12.537 10:41:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:46:12.537 ************************************ 00:46:12.537 END TEST per_node_1G_alloc 00:46:12.537 ************************************ 00:46:12.537 10:41:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:46:12.537 10:41:20 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:46:12.537 10:41:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:12.537 10:41:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:12.537 10:41:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:46:12.537 ************************************ 00:46:12.537 START TEST even_2G_alloc 00:46:12.537 ************************************ 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:46:12.537 10:41:20 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:46:12.537 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:13.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:13.110 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:13.110 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.110 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6477632 kB' 'MemAvailable: 9418368 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 495576 kB' 'Inactive: 2771548 kB' 'Active(anon): 133368 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124520 kB' 'Mapped: 49092 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171188 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82900 kB' 'KernelStack: 6464 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
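Note: a little further up, even_2G_alloc asks get_test_nr_hugepages for 2097152 kB and ends up setting NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before re-running scripts/setup.sh. The sizing arithmetic implied by that trace (2097152 kB divided by the 2048 kB Hugepagesize reported in the meminfo dumps) is sketched below; the variable names and the single-node spread are illustrative, not the real hugepages.sh code.

#!/usr/bin/env bash
# Illustrative only: reproduces the sizing arithmetic visible in the trace
# (a 2097152 kB request with a 2048 kB Hugepagesize yields 1024 pages).
size_kb=2097152
default_hugepage_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this VM

(( size_kb >= default_hugepage_kb )) || { echo "request smaller than one hugepage" >&2; exit 1; }
nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 2097152 / 2048 = 1024

# One NUMA node, so the whole count lands on node 0, mirroring
# nodes_test[_no_nodes - 1]=1024 in the trace above.
no_nodes=1
declare -a nodes_test
nodes_test[no_nodes - 1]=$nr_hugepages

echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"   # the values set before scripts/setup.sh is re-run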
00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.111 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.112 10:41:20 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6477452 kB' 'MemAvailable: 9418188 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 495120 kB' 'Inactive: 2771548 kB' 'Active(anon): 132912 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124044 kB' 'Mapped: 49004 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171200 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82912 kB' 'KernelStack: 6476 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 
10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.112 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.113 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 
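Note: the HugePages_Rsvd and HugePages_Surp lookups being traced here feed the same per-node check that closed per_node_1G_alloc above ("node0=512 expecting 512"). A rough sketch of that comparison is below: the expected per-node count is bumped by reserved (system-wide) and surplus (per-node) pages, then matched against what the kernel reports. The meminfo_field helper and the choice of HugePages_Free as the kernel-reported figure are assumptions for illustration, not the exact setup/hugepages.sh logic.

#!/usr/bin/env bash
# Rough sketch in the spirit of the verify step traced above.
declare -a nodes_test=( [0]=1024 )   # what even_2G_alloc requested on node 0

meminfo_field() {   # meminfo_field <key> [node] -- hypothetical helper
    local key=$1 node=$2
    if [[ -n $node ]]; then
        awk -v k="$key:" '$1 == "Node" && $3 == k {print $4}' \
            "/sys/devices/system/node/node$node/meminfo"
    else
        awk -v k="$key:" '$1 == k {print $2}' /proc/meminfo
    fi
}

resv=$(meminfo_field HugePages_Rsvd)   # reserved pages only appear in /proc/meminfo
for node in "${!nodes_test[@]}"; do
    surp=$(meminfo_field HugePages_Surp "$node")
    (( nodes_test[node] += resv + surp ))
    actual=$(meminfo_field HugePages_Free "$node")   # assumption: which field counts as "actual"
    echo "node$node=$actual expecting ${nodes_test[node]}"
    [[ $actual == "${nodes_test[node]}" ]] || exit 1
done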
00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6477452 kB' 'MemAvailable: 9418188 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 495356 kB' 'Inactive: 2771548 kB' 'Active(anon): 133148 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124264 kB' 'Mapped: 49004 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171200 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82912 kB' 'KernelStack: 6476 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.114 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:46:13.115 nr_hugepages=1024 00:46:13.115 resv_hugepages=0 00:46:13.115 surplus_hugepages=0 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:46:13.115 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:46:13.116 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:46:13.116 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:46:13.116 anon_hugepages=0 00:46:13.116 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:46:13.116 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:46:13.116 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 
-- # local mem_f mem 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6477772 kB' 'MemAvailable: 9418508 kB' 'Buffers: 2436 kB' 'Cached: 3141788 kB' 'SwapCached: 0 kB' 'Active: 495128 kB' 'Inactive: 2771548 kB' 'Active(anon): 132920 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124048 kB' 'Mapped: 49004 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171200 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82912 kB' 'KernelStack: 6476 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.377 10:41:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.377 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.378 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6477016 kB' 'MemUsed: 5764956 kB' 'SwapCached: 0 kB' 'Active: 495324 kB' 
'Inactive: 2771548 kB' 'Active(anon): 133116 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 3144224 kB' 'Mapped: 49004 kB' 'AnonPages: 124328 kB' 'Shmem: 10468 kB' 'KernelStack: 6508 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88288 kB' 'Slab: 171204 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.379 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:46:13.380 node0=1024 expecting 1024 00:46:13.380 ************************************ 00:46:13.380 END TEST even_2G_alloc 00:46:13.380 ************************************ 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:46:13.380 00:46:13.380 real 0m0.740s 00:46:13.380 user 0m0.331s 00:46:13.380 sys 0m0.434s 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:13.380 10:41:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:46:13.380 10:41:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:46:13.380 10:41:21 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:46:13.380 10:41:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:13.380 10:41:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:13.380 10:41:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:46:13.380 ************************************ 00:46:13.380 START TEST odd_alloc 00:46:13.380 ************************************ 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # 
odd_alloc 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:46:13.380 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:13.951 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:13.951 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:13.951 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6487340 kB' 'MemAvailable: 9428080 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 494960 kB' 'Inactive: 2771552 kB' 'Active(anon): 132752 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123900 kB' 'Mapped: 48984 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171244 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82956 kB' 'KernelStack: 6544 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.951 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
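Annotation: at the start of this odd_alloc run, get_test_nr_hugepages is traced requesting 2098176 kB and settling on nr_hugepages=1025, all placed on node0 because no user node list is given and the VM reports a single NUMA node. A hedged sketch that reproduces those numbers, assuming a 2048 kB default hugepage size (the "Hugepagesize: 2048 kB" value in the meminfo dumps) and round-half-up arithmetic; neither assumption is confirmed to be the exact computation in hugepages.sh:

#!/usr/bin/env bash
# Sketch of how a 2098176 kB request can turn into nr_hugepages=1025 on a
# single-node box, as seen in the odd_alloc trace. The rounding choice here
# is an assumption that reproduces the logged figure.
size_kb=2098176
default_hugepagesize_kb=2048                      # "Hugepagesize: 2048 kB" in the log
nr_hugepages=$(( (size_kb + default_hugepagesize_kb / 2) / default_hugepagesize_kb ))

user_nodes=()                                     # no explicit node list in this test
no_nodes=1                                        # the VM reports one NUMA node
nodes_test=()
if (( ${#user_nodes[@]} == 0 )); then
    # Everything lands on the last (here: only) node, i.e. node0 gets 1025.
    nodes_test[no_nodes - 1]=$nr_hugepages
fi
echo "node0=${nodes_test[0]} expecting $nr_hugepages"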
00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.952 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6488020 kB' 'MemAvailable: 9428760 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 495128 kB' 'Inactive: 2771552 kB' 'Active(anon): 132920 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123832 kB' 'Mapped: 48984 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171232 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82944 kB' 'KernelStack: 6512 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
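Annotation: once the scans in this block resolve AnonHugePages, HugePages_Surp and HugePages_Rsvd to 0, the verify step compares the usable per-node hugepage count with the expected figure, which is what the "node0=1024 expecting 1024" line at the end of even_2G_alloc records. A rough, self-contained sketch of that comparison; the helper name meminfo and the exact subtraction are assumptions, and the real hugepages.sh tracks the counters per node:

#!/usr/bin/env bash
# Rough shape of the verification: subtract surplus/reserved pages and compare
# the remainder with the expected count. Variable names follow the traced
# hugepages.sh, but this is a sketch, not the verbatim SPDK code.
meminfo() { awk -v k="$1" '$1 == k":" { print $2; exit }' /proc/meminfo; }

verify_nr_hugepages_sketch() {
    local expected=$1
    local surp resv total usable
    surp=$(meminfo HugePages_Surp)    # resolves to 0 in the log
    resv=$(meminfo HugePages_Rsvd)    # resolves to 0 in the log
    total=$(meminfo HugePages_Total)
    usable=$(( total - surp - resv ))
    echo "node0=$usable expecting $expected"
    [[ $usable == "$expected" ]]      # same style as the "[[ 1024 == 1024 ]]" check in the log
}

verify_nr_hugepages_sketch 1025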
00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.953 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 
10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.954 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6488020 kB' 'MemAvailable: 9428760 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 494836 kB' 'Inactive: 2771552 kB' 'Active(anon): 132628 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123764 kB' 'Mapped: 48864 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171232 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82944 kB' 'KernelStack: 6480 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 
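Annotation: earlier in this odd_alloc trace the allocation itself is driven through scripts/setup.sh with HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes. For reference only, the generic kernel interface behind such a request is the vm.nr_hugepages knob; this stand-alone snippet is not what setup.sh runs, just the standard way to ask the kernel for 1025 x 2 MB pages and then inspect the resulting counters:

#!/usr/bin/env bash
# Stand-alone illustration of the kernel knob behind the allocation, not the
# SPDK tooling itself: request 1025 hugepages system-wide, then show the
# HugePages_* counters that the test's get_meminfo scans read back.
echo 1025 | sudo tee /proc/sys/vm/nr_hugepages >/dev/null
grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo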
10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.955 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 
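The long run of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' / 'continue' pairs above and below is the get_meminfo helper in setup/common.sh scanning a meminfo dump one field at a time until it reaches the requested key. A condensed sketch of that pattern, reconstructed from the traced commands (simplified for illustration, not the verbatim function; the extglob strip of the per-node "Node N " prefix follows the mem=("${mem[@]#Node +([0-9]) }") line visible in the trace):

    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}            # e.g. get_meminfo HugePages_Rsvd 0
        local mem_f=/proc/meminfo
        # Per-node lookups switch to the node-local meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix of per-node files
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated test/continue pairs seen in this log
            echo "$val"                        # numeric value only; the trailing "kB" is absorbed by the throwaway field
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd    # prints 0 on the box traced here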
10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:46:13.956 nr_hugepages=1025 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:46:13.956 resv_hugepages=0 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:46:13.956 surplus_hugepages=0 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:46:13.956 anon_hugepages=0 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.956 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6487516 kB' 'MemAvailable: 9428256 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 495028 kB' 'Inactive: 2771552 kB' 'Active(anon): 132820 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123956 kB' 'Mapped: 48864 kB' 'Shmem: 10468 kB' 'KReclaimable: 88288 kB' 'Slab: 171232 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82944 kB' 'KernelStack: 6464 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.957 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:13.958 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.218 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6487516 kB' 'MemUsed: 5754456 kB' 'SwapCached: 0 kB' 'Active: 494624 kB' 'Inactive: 2771552 kB' 'Active(anon): 132416 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 3144228 kB' 'Mapped: 48864 kB' 'AnonPages: 123560 kB' 'Shmem: 10468 kB' 'KernelStack: 6464 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88288 kB' 'Slab: 171232 kB' 'SReclaimable: 88288 kB' 'SUnreclaim: 82944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 
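The node-0 meminfo dump printed just above reports two derived figures that can be cross-checked by hand. A quick arithmetic check with the values from this run (numbers copied from the dump; variable names chosen here for illustration):

    mem_total=12241972   # kB, MemTotal from the node 0 dump above
    mem_free=6487516     # kB, MemFree
    echo $(( mem_total - mem_free ))        # 5754456 -> matches 'MemUsed: 5754456 kB'

    sreclaimable=88288   # kB, SReclaimable
    sunreclaim=82944     # kB, SUnreclaim
    echo $(( sreclaimable + sunreclaim ))   # 171232 -> matches 'Slab: 171232 kB'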
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.219 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:46:14.220 node0=1025 expecting 1025 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:46:14.220 00:46:14.220 real 0m0.713s 00:46:14.220 user 0m0.289s 00:46:14.220 sys 0m0.462s 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:14.220 10:41:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:46:14.220 ************************************ 00:46:14.220 END TEST odd_alloc 00:46:14.220 ************************************ 00:46:14.220 10:41:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:46:14.220 10:41:21 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:46:14.220 10:41:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- 
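The 'node0=1025 expecting 1025' line above is where the odd_alloc accounting passes. A rough condensation of that check, reusing the get_meminfo sketch from earlier (an illustration under the single-node layout seen in this run, not the verbatim hugepages.sh verify logic):

    nr_hugepages=1025                       # the odd page count requested by the test
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1025 in this run

    # The pool only counts as settled once the kernel-reported total equals the
    # requested pages plus any surplus and reserved pages.
    (( total == nr_hugepages + surp + resv )) || exit 1

    # Single NUMA node here, so the whole pool is expected on node 0.
    node0=$(( nr_hugepages + resv + $(get_meminfo HugePages_Surp 0) ))
    echo "node0=$node0 expecting $nr_hugepages"
    (( node0 == nr_hugepages )) || exit 1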
# '[' 2 -le 1 ']' 00:46:14.220 10:41:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:14.220 10:41:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:46:14.220 ************************************ 00:46:14.220 START TEST custom_alloc 00:46:14.220 ************************************ 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # 
get_test_nr_hugepages_per_node 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:46:14.220 10:41:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:14.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:14.794 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:14.794 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:14.794 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:46:14.794 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:46:14.794 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:46:14.794 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:46:14.794 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:46:14.794 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:46:14.794 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:14.795 10:41:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549116 kB' 'MemAvailable: 10489856 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 490028 kB' 'Inactive: 2771552 kB' 'Active(anon): 127820 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118664 kB' 'Mapped: 48176 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 171004 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82720 kB' 'KernelStack: 6364 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.795 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549716 kB' 'MemAvailable: 10490456 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 489796 kB' 'Inactive: 2771552 kB' 'Active(anon): 127588 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118704 kB' 'Mapped: 48064 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 171004 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82720 kB' 'KernelStack: 6348 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
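The get_meminfo calls traced above boil down to: read /proc/meminfo (or a node's meminfo file when a node is given), split each "Key: value kB" line on ': ', and return the value column for the one key requested; the long runs of [[ ... ]] / continue lines are simply xtrace printing that per-key scan. A minimal standalone sketch of the same idea, under a hypothetical name (the real implementation is get_meminfo in setup/common.sh):

get_meminfo_sketch() {
    local get=$1 var val _
    # Scan "Key: value [kB]" lines and print the value column for the requested key.
    # (The real helper can also read /sys/devices/system/node/node<N>/meminfo.)
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_meminfo_sketch AnonHugePages     # prints 0 on this box, matching anon=0 above
get_meminfo_sketch HugePages_Surp    # prints 0, matching surp=0 further down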
00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.796 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
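For reference, the numbers in these dumps line up with what the test asked for at the top: get_test_nr_hugepages 1048576 with the default 2048 kB hugepage size yields 512 pages, all assigned to node 0 (HUGENODE='nodes_hp[0]=512'), and the dumps report HugePages_Total: 512 and Hugetlb: 1048576 kB accordingly. The arithmetic, spelled out:

size_kb=1048576        # argument to get_test_nr_hugepages
hugepagesize_kb=2048   # "Hugepagesize: 2048 kB" in the dumps
echo $(( size_kb / hugepagesize_kb ))   # 512 == nr_hugepages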
00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.797 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:46:14.798 10:41:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549716 kB' 'MemAvailable: 10490456 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 489760 kB' 'Inactive: 2771552 kB' 'Active(anon): 127552 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118716 kB' 'Mapped: 48064 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 171004 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82720 kB' 'KernelStack: 6348 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
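HUGENODE='nodes_hp[0]=512' asks scripts/setup.sh to place all 512 pages on NUMA node 0. The exact commands setup.sh runs are not visible in this excerpt; purely for illustration, the kernel's generic per-node interface for such a reservation looks like this (requires root, and a node0 directory only exists on NUMA-enabled kernels):

echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # 512 once reserved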
00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.798 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.799 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:46:14.800 nr_hugepages=512 00:46:14.800 resv_hugepages=0 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:46:14.800 surplus_hugepages=0 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:46:14.800 anon_hugepages=0 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:14.800 
10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549716 kB' 'MemAvailable: 10490456 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 489700 kB' 'Inactive: 2771552 kB' 'Active(anon): 127492 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118600 kB' 'Mapped: 48064 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 171004 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82720 kB' 'KernelStack: 6332 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.800 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.801 10:41:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:46:14.801 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549716 kB' 'MemUsed: 4692256 kB' 'SwapCached: 0 kB' 'Active: 489712 kB' 'Inactive: 2771552 kB' 'Active(anon): 127504 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 3144228 kB' 'Mapped: 48064 kB' 'AnonPages: 118668 kB' 'Shmem: 10468 kB' 'KernelStack: 6332 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88284 kB' 'Slab: 171004 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.802 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.803 10:41:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:46:14.803 node0=512 expecting 512 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:46:14.803 00:46:14.803 real 0m0.747s 00:46:14.803 user 0m0.357s 00:46:14.803 sys 0m0.441s 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:14.803 10:41:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:46:14.803 ************************************ 00:46:14.803 END TEST custom_alloc 00:46:14.803 ************************************ 00:46:15.062 10:41:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:46:15.062 10:41:22 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:46:15.062 10:41:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:15.062 10:41:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:15.062 10:41:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:46:15.062 ************************************ 00:46:15.062 START TEST no_shrink_alloc 00:46:15.062 ************************************ 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:46:15.062 10:41:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:15.633 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:15.633 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:15.633 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.633 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6550088 kB' 'MemAvailable: 9490828 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 489944 kB' 'Inactive: 2771552 kB' 'Active(anon): 127736 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118604 kB' 'Mapped: 48176 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 170960 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82676 kB' 'KernelStack: 6352 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.634 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6549836 kB' 'MemAvailable: 9490576 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 489484 kB' 'Inactive: 2771552 kB' 'Active(anon): 127276 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118412 kB' 'Mapped: 48064 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 170960 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82676 kB' 'KernelStack: 6336 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 
10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.635 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6549836 kB' 'MemAvailable: 9490576 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 489528 kB' 'Inactive: 2771552 kB' 'Active(anon): 127320 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118460 kB' 'Mapped: 48064 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 170960 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82676 kB' 'KernelStack: 6352 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.636 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 
10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:15.637 10:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:46:15.637 nr_hugepages=1024 00:46:15.637 resv_hugepages=0 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:46:15.637 surplus_hugepages=0 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:46:15.637 anon_hugepages=0 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:46:15.637 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6549836 kB' 'MemAvailable: 9490576 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 489684 kB' 'Inactive: 2771552 kB' 'Active(anon): 127476 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118612 kB' 'Mapped: 48064 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 170960 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82676 kB' 'KernelStack: 6336 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.638 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6549836 kB' 'MemUsed: 5692136 kB' 'SwapCached: 0 kB' 'Active: 489632 kB' 'Inactive: 2771552 kB' 'Active(anon): 127424 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 3144228 kB' 'Mapped: 48064 kB' 'AnonPages: 118548 kB' 'Shmem: 10468 kB' 'KernelStack: 6320 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88284 kB' 'Slab: 170960 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.639 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:46:15.640 node0=1024 expecting 1024 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:46:15.640 10:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:16.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:16.211 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:16.211 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:16.211 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:16.211 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6554024 kB' 'MemAvailable: 9494764 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 490252 kB' 'Inactive: 2771552 kB' 'Active(anon): 128044 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 119212 kB' 'Mapped: 48336 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 170928 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82644 kB' 'KernelStack: 6456 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 
10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.211 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:16.212 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6554024 kB' 'MemAvailable: 9494764 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 489440 kB' 'Inactive: 2771552 kB' 'Active(anon): 127232 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118404 kB' 'Mapped: 48204 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 170928 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82644 kB' 'KernelStack: 6328 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 
10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.213 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
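The same scan is now running for HugePages_Surp (setup/hugepages.sh@99). Before each scan, common.sh@18-29 in the trace picks the data source: with "node=" left empty the per-node path under /sys/devices/system/node does not resolve, so the global /proc/meminfo is used, and the "Node N " prefix that per-node meminfo files carry is stripped from every line. A hedged reconstruction of that setup (the function name, the if-ordering, and the mapfile redirect are inferred; only the expansions shown in the trace are verbatim):

# Source selection seen at the top of each get_meminfo scan in this log.
shopt -s extglob                           # needed for the +([0-9]) pattern below
read_meminfo_lines() {
    local node=$1 mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"              # the trace's "mapfile -t mem"; xtrace does not show the redirect
    mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node N " prefix of per-node meminfo lines
    printf '%s\n' "${mem[@]}"
}

The "[[ -n '' ]]" test at common.sh@25 in the trace corresponds to the node check here; this run passes no node, so every scan reads the global file.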
00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6554284 kB' 'MemAvailable: 9495024 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 489700 kB' 'Inactive: 2771552 kB' 'Active(anon): 127492 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118664 kB' 'Mapped: 48204 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 170928 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82644 kB' 'KernelStack: 6328 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:16.214 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.214 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.215 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
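This third scan resolves HugePages_Rsvd; just below it hugepages.sh records resv=0, echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and then runs the two arithmetic checks at @107 and @109 before re-reading HugePages_Total. A hedged sketch of that accounting step, with the three scan results inlined as the values this log resolved them to ("expected" stands in for the already-expanded 1024 on the left of the (( )) checks; its exact source is not visible in this excerpt):

# no_shrink_alloc bookkeeping as it appears after the scans (hugepages.sh@97-110).
check_no_shrink_sketch() {
    local expected=1024 nr_hugepages=1024
    local anon=0 surp=0 resv=0             # AnonHugePages / HugePages_Surp / HugePages_Rsvd from the scans above
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    (( expected == nr_hugepages + surp + resv )) || return 1   # @107 in the trace
    (( expected == nr_hugepages ))                             # @109 in the trace
}

In this run all three derived figures are zero, so both checks pass and the test proceeds to confirm HugePages_Total is still 1024.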
00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:46:16.216 nr_hugepages=1024 00:46:16.216 resv_hugepages=0 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:46:16.216 surplus_hugepages=0 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:46:16.216 anon_hugepages=0 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6553548 kB' 'MemAvailable: 9494288 kB' 'Buffers: 2436 kB' 'Cached: 3141792 kB' 'SwapCached: 0 kB' 'Active: 489780 kB' 'Inactive: 2771552 kB' 'Active(anon): 127572 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118744 kB' 'Mapped: 48068 kB' 'Shmem: 10468 kB' 'KReclaimable: 88284 kB' 'Slab: 170928 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82644 kB' 'KernelStack: 6288 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.216 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.217 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.478 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6553548 kB' 'MemUsed: 5688424 kB' 'SwapCached: 0 kB' 'Active: 489860 kB' 'Inactive: 2771552 kB' 'Active(anon): 127652 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2771552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 3144228 kB' 'Mapped: 48588 kB' 'AnonPages: 118708 kB' 'Shmem: 10468 kB' 'KernelStack: 6384 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88284 kB' 'Slab: 170928 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 82644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 
10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.479 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.480 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.481 10:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:46:16.481 node0=1024 expecting 1024 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:46:16.481 00:46:16.481 real 0m1.401s 00:46:16.481 user 0m0.638s 00:46:16.481 sys 0m0.863s 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:16.481 10:41:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:46:16.481 ************************************ 00:46:16.481 END TEST no_shrink_alloc 00:46:16.481 ************************************ 00:46:16.481 10:41:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:46:16.481 10:41:24 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:46:16.481 10:41:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:46:16.481 10:41:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:46:16.481 10:41:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:46:16.481 10:41:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:46:16.481 10:41:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:46:16.481 10:41:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:46:16.481 10:41:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:46:16.481 10:41:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:46:16.481 00:46:16.481 real 0m6.158s 00:46:16.481 user 0m2.674s 00:46:16.481 sys 0m3.676s 00:46:16.481 10:41:24 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:16.481 10:41:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:46:16.481 ************************************ 00:46:16.481 END TEST hugepages 00:46:16.481 ************************************ 00:46:16.481 10:41:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:46:16.481 10:41:24 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:46:16.481 10:41:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:16.481 10:41:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:16.481 10:41:24 
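The no_shrink_alloc trace above is one long get_meminfo/get_nodes pass: every key in /proc/meminfo (or a node-local meminfo file) is read with the IFS=': ' read loop until the requested field matches, and the test then checks that HugePages_Total still equals the 1024 pages it requested plus any surplus or reserved pages. A minimal sketch of that accounting, written against the same procfs/sysfs paths but not copied from setup/common.sh:

# Read one field from /proc/meminfo, or from a node's meminfo when a node
# id is given; node files prefix every line with "Node <n> ", so strip it.
get_field() {                         # usage: get_field HugePages_Total [node]
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

expected=1024
total=$(get_field HugePages_Total)
surp=$(get_field HugePages_Surp)
resv=$(get_field HugePages_Rsvd)
(( total == expected + surp + resv )) && echo "node0=$total expecting $expected"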
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:46:16.481 ************************************ 00:46:16.481 START TEST driver 00:46:16.481 ************************************ 00:46:16.481 10:41:24 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:46:16.740 * Looking for test storage... 00:46:16.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:46:16.740 10:41:24 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:46:16.740 10:41:24 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:46:16.740 10:41:24 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:17.306 10:41:25 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:46:17.306 10:41:25 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:17.306 10:41:25 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:17.306 10:41:25 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:46:17.306 ************************************ 00:46:17.306 START TEST guess_driver 00:46:17.306 ************************************ 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:46:17.307 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:46:17.565 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:46:17.565 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:46:17.565 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:46:17.565 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:46:17.565 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:46:17.565 Looking for 
driver=uio_pci_generic 00:46:17.565 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:46:17.565 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:46:17.565 10:41:25 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:46:17.565 10:41:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:46:17.565 10:41:25 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:46:18.133 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:46:18.133 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:46:18.133 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:46:18.392 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:46:18.392 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:46:18.392 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:46:18.392 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:46:18.392 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:46:18.392 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:46:18.392 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:46:18.392 10:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:46:18.392 10:41:26 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:46:18.392 10:41:26 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:19.328 00:46:19.328 real 0m1.839s 00:46:19.328 user 0m0.628s 00:46:19.328 sys 0m1.267s 00:46:19.328 10:41:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:19.328 10:41:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:46:19.328 ************************************ 00:46:19.328 END TEST guess_driver 00:46:19.328 ************************************ 00:46:19.328 10:41:27 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:46:19.328 00:46:19.328 real 0m2.819s 00:46:19.328 user 0m0.957s 00:46:19.328 sys 0m2.010s 00:46:19.328 10:41:27 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:19.328 10:41:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:46:19.328 ************************************ 00:46:19.328 END TEST driver 00:46:19.328 ************************************ 00:46:19.328 10:41:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:46:19.328 10:41:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:46:19.328 10:41:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:19.328 10:41:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:19.328 10:41:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:46:19.328 ************************************ 00:46:19.328 START TEST devices 00:46:19.328 ************************************ 00:46:19.328 10:41:27 setup.sh.devices -- common/autotest_common.sh@1123 -- # 
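The guess_driver run above settles on uio_pci_generic because no IOMMU groups are populated and vfio's unsafe no-IOMMU mode is off. A condensed sketch of that decision, under the assumption that checking modprobe --show-depends for a .ko path is enough to call the module available (the traced driver.sh does more bookkeeping around it):

pick_driver() {
    # Prefer vfio-pci when the IOMMU is usable, or when vfio's unsafe
    # no-IOMMU mode has been switched on; otherwise fall back to uio.
    local groups=(/sys/kernel/iommu_groups/*) unsafe=''
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if [[ -e ${groups[0]} || $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

driver=$(pick_driver) && echo "Looking for driver=$driver"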
/home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:46:19.586 * Looking for test storage... 00:46:19.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:46:19.587 10:41:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:46:19.587 10:41:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:46:19.587 10:41:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:46:19.587 10:41:27 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:20.522 10:41:28 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:20.522 10:41:28 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:46:20.523 10:41:28 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:46:20.523 10:41:28 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:46:20.523 10:41:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:46:20.523 
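Before it sizes anything, the devices test above filters out zoned namespaces: a block device whose queue/zoned attribute reads anything other than "none" is skipped. A small sketch of that filter (same sysfs attribute as the trace; the associative-array bookkeeping is illustrative):

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    [[ -e $nvme/queue/zoned ]] || continue          # attribute missing: treat as regular
    [[ $(<"$nvme/queue/zoned") != none ]] && zoned_devs[$dev]=1
done
echo "zoned devices excluded: ${#zoned_devs[@]}"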
10:41:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:46:20.523 No valid GPT data, bailing 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:46:20.523 10:41:28 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:46:20.523 10:41:28 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:20.523 10:41:28 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:46:20.523 No valid GPT data, bailing 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:46:20.523 10:41:28 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:46:20.523 10:41:28 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:46:20.523 10:41:28 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:46:20.523 10:41:28 setup.sh.devices -- 
setup/devices.sh@201 -- # ctrl=nvme0 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:46:20.523 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:46:20.523 10:41:28 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:46:20.781 No valid GPT data, bailing 00:46:20.781 10:41:28 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:46:20.781 10:41:28 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:46:20.781 10:41:28 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:46:20.781 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:46:20.781 10:41:28 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:46:20.781 10:41:28 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:46:20.781 10:41:28 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:46:20.781 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:46:20.781 10:41:28 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:46:20.782 10:41:28 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:46:20.782 10:41:28 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:46:20.782 No valid GPT data, bailing 00:46:20.782 10:41:28 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:46:20.782 10:41:28 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:46:20.782 10:41:28 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:46:20.782 10:41:28 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:46:20.782 10:41:28 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:46:20.782 10:41:28 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:46:20.782 10:41:28 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:46:20.782 10:41:28 setup.sh.devices -- 
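The four "No valid GPT data, bailing" checks above are what let nvme0n1 through as the test disk: a device is usable when blkid finds no partition-table type on it and its capacity clears the 3 GiB minimum (values in /sys/block/<dev>/size are 512-byte sectors). A sketch of that gate, leaving out the extra spdk-gpt.py pass the trace also runs:

min_disk_size=$((3 * 1024 * 1024 * 1024))           # 3221225472, as in devices.sh@198
usable_disk() {
    local dev=$1 pt bytes
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
    [[ -z $pt ]] || return 1                        # partition table present: in use
    bytes=$(( $(<"/sys/block/$dev/size") * 512 ))   # sectors -> bytes
    (( bytes >= min_disk_size ))
}

usable_disk nvme0n1 && echo "nvme0n1 can be used as test_disk"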
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:20.782 10:41:28 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:20.782 10:41:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:46:20.782 ************************************ 00:46:20.782 START TEST nvme_mount 00:46:20.782 ************************************ 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:46:20.782 10:41:28 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:46:21.716 Creating new GPT entries in memory. 00:46:21.716 GPT data structures destroyed! You may now partition the disk using fdisk or 00:46:21.716 other utilities. 00:46:21.716 10:41:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:46:21.716 10:41:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:46:21.716 10:41:29 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:46:21.716 10:41:29 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:46:21.716 10:41:29 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:46:23.091 Creating new GPT entries in memory. 00:46:23.091 The operation has completed successfully. 
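Note on the device enumeration traced further up: block_in_use only accepts a namespace when spdk-gpt.py finds no SPDK GPT data and blkid reports no partition-table type, and sec_size_to_bytes then derives the capacity from sysfs. A rough stand-alone sketch of that check, assuming the device name below and a 1 GiB floor (the real minimum comes from min_disk_size inside devices.sh):

# Sketch of the block_in_use / sec_size_to_bytes logic traced above; dev and the size floor are examples.
dev=nvme0n1
pt=$(blkid -s PTTYPE -o value "/dev/$dev")            # empty when no partition table is present
size=$(( $(cat "/sys/block/$dev/size") * 512 ))       # sysfs 'size' is reported in 512-byte sectors
if [[ -z "$pt" && "$size" -ge $(( 1024 * 1024 * 1024 )) ]]; then
  echo "/dev/$dev looks unused and big enough for the mount tests"
fi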
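The partition_drive step just above then boils down to a GPT wipe plus one new partition, with scripts/sync_dev_uevents.sh waiting for the partition uevent before the filesystem is created. A minimal manual equivalent (disk name and sector range copied from the trace; udevadm settle stands in for the uevent helper and is an assumption, not what the script runs):

# Sketch of the nvme_mount partitioning sequence traced above.
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                              # produces the 'GPT data structures destroyed!' message
flock "$disk" sgdisk "$disk" --new=1:2048:264191      # partition 1 over the same sector range as the trace
udevadm settle                                        # stand-in for sync_dev_uevents.sh block/partition
mkfs.ext4 -qF "${disk}p1"                             # matches the mkfs step at setup/common.sh@71

The 2048:264191 range is 262144 sectors, i.e. the 1073741824 / 4096 value computed at common.sh@51 above.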
00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 72025 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:46:23.091 10:41:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:46:23.350 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:23.350 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:46:23.350 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:46:23.350 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:23.350 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:23.350 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:23.350 10:41:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:23.350 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:46:23.609 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:46:23.609 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:46:23.868 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:46:23.868 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:46:23.868 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:46:23.868 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:46:23.868 10:41:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:46:24.436 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:24.436 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:46:24.436 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:46:24.436 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:24.436 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:24.436 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:24.436 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:24.436 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:46:24.738 10:41:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:46:25.013 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:25.013 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:46:25.013 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:46:25.013 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:25.013 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:25.013 10:41:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:25.270 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:25.270 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:25.270 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:25.270 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:25.528 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:46:25.528 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:46:25.529 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:46:25.529 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:46:25.529 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:25.529 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:46:25.529 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:46:25.529 10:41:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:46:25.529 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:46:25.529 00:46:25.529 real 0m4.658s 00:46:25.529 user 0m0.876s 00:46:25.529 sys 0m1.520s 00:46:25.529 10:41:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:25.529 10:41:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:46:25.529 ************************************ 00:46:25.529 END TEST nvme_mount 00:46:25.529 
************************************ 00:46:25.529 10:41:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:46:25.529 10:41:33 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:46:25.529 10:41:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:25.529 10:41:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:25.529 10:41:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:46:25.529 ************************************ 00:46:25.529 START TEST dm_mount 00:46:25.529 ************************************ 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:46:25.529 10:41:33 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:46:26.462 Creating new GPT entries in memory. 00:46:26.462 GPT data structures destroyed! You may now partition the disk using fdisk or 00:46:26.462 other utilities. 00:46:26.462 10:41:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:46:26.462 10:41:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:46:26.462 10:41:34 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:46:26.462 10:41:34 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:46:26.462 10:41:34 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:46:27.839 Creating new GPT entries in memory. 00:46:27.839 The operation has completed successfully. 00:46:27.839 10:41:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:46:27.839 10:41:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:46:27.839 10:41:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:46:27.839 10:41:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:46:27.839 10:41:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:46:28.773 The operation has completed successfully. 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 72467 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:46:28.773 
10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:46:28.773 10:41:36 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:46:29.032 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:29.032 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:46:29.032 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:46:29.032 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:29.032 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:29.032 10:41:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:29.290 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:29.290 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:29.290 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:29.290 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:46:29.548 10:41:37 
setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:46:29.548 10:41:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:46:29.806 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:29.806 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:46:29.806 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:46:29.806 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:29.806 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:29.806 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:46:30.064 10:41:37 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:46:30.322 10:41:38 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
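For reference, the nvme_dm_test device exercised in the dm_mount trace above is a device-mapper target stacked on nvme0n1p1 and nvme0n1p2; the log shows the dmsetup create call but never the table handed to it. A hypothetical linear-concatenation table for the two 262144-sector partitions created earlier would look like the sketch below (whether the script builds exactly this layout is an assumption):

# Illustrative only: a linear dm table spanning the two test partitions.
dmsetup create nvme_dm_test <<'EOF'
0      262144 linear /dev/nvme0n1p1 0
262144 262144 linear /dev/nvme0n1p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test                # same formatting step as setup/common.sh@71
mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount

Teardown then mirrors what follows in the trace: dmsetup remove --force nvme_dm_test and wipefs --all on both backing partitions.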
00:46:30.322 10:41:38 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:46:30.322 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:46:30.322 10:41:38 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:46:30.322 10:41:38 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:46:30.322 00:46:30.322 real 0m4.719s 00:46:30.322 user 0m0.601s 00:46:30.322 sys 0m1.071s 00:46:30.322 ************************************ 00:46:30.322 END TEST dm_mount 00:46:30.322 ************************************ 00:46:30.322 10:41:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:30.322 10:41:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:46:30.322 10:41:38 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:46:30.322 10:41:38 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:46:30.322 10:41:38 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:46:30.322 10:41:38 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:46:30.322 10:41:38 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:46:30.322 10:41:38 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:46:30.322 10:41:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:46:30.322 10:41:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:46:30.580 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:46:30.580 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:46:30.580 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:46:30.580 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:46:30.580 10:41:38 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:46:30.580 10:41:38 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:46:30.580 10:41:38 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:46:30.580 10:41:38 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:46:30.580 10:41:38 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:46:30.580 10:41:38 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:46:30.580 10:41:38 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:46:30.580 00:46:30.580 real 0m11.219s 00:46:30.580 user 0m2.183s 00:46:30.580 sys 0m3.447s 00:46:30.580 10:41:38 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:30.580 10:41:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:46:30.580 ************************************ 00:46:30.580 END TEST devices 00:46:30.580 ************************************ 00:46:30.580 10:41:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:46:30.580 ************************************ 00:46:30.580 END TEST setup.sh 00:46:30.580 ************************************ 00:46:30.580 00:46:30.580 real 0m26.702s 00:46:30.580 user 0m8.292s 00:46:30.580 sys 0m13.205s 00:46:30.580 10:41:38 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:30.580 10:41:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:46:30.838 10:41:38 -- common/autotest_common.sh@1142 -- # return 0 00:46:30.838 10:41:38 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:46:31.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:31.404 Hugepages 00:46:31.404 node hugesize free / total 00:46:31.404 node0 1048576kB 0 / 0 00:46:31.404 node0 2048kB 2048 / 2048 00:46:31.404 00:46:31.404 Type BDF Vendor Device NUMA Driver Device Block devices 00:46:31.661 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:46:31.661 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:46:31.920 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:46:31.920 10:41:39 -- spdk/autotest.sh@130 -- # uname -s 00:46:31.920 10:41:39 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:46:31.920 10:41:39 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:46:31.920 10:41:39 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:32.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:32.853 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:32.853 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:46:32.853 10:41:40 -- common/autotest_common.sh@1532 -- # sleep 1 00:46:34.229 10:41:41 -- common/autotest_common.sh@1533 -- # bdfs=() 00:46:34.229 10:41:41 -- common/autotest_common.sh@1533 -- # local bdfs 00:46:34.229 10:41:41 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:46:34.229 10:41:41 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:46:34.229 10:41:41 -- common/autotest_common.sh@1513 -- # bdfs=() 00:46:34.229 10:41:41 -- common/autotest_common.sh@1513 -- # local bdfs 00:46:34.229 10:41:41 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:46:34.229 10:41:41 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:46:34.229 10:41:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:46:34.229 10:41:41 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:46:34.229 10:41:41 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:46:34.229 10:41:41 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:34.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:34.489 Waiting for block devices as requested 00:46:34.749 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:46:34.749 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:34.749 10:41:42 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:46:34.749 10:41:42 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:46:34.749 10:41:42 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:46:34.749 10:41:42 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:46:34.749 10:41:42 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:46:34.749 10:41:42 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:46:34.749 10:41:42 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:46:34.749 10:41:42 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:46:34.749 10:41:42 -- common/autotest_common.sh@1539 -- # 
nvme_ctrlr=/dev/nvme1 00:46:34.749 10:41:42 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:46:34.749 10:41:42 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:46:34.749 10:41:42 -- common/autotest_common.sh@1545 -- # grep oacs 00:46:34.749 10:41:42 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:46:34.749 10:41:42 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:46:34.749 10:41:42 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:46:34.749 10:41:42 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:46:34.749 10:41:42 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:46:34.749 10:41:42 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:46:34.749 10:41:42 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:46:34.749 10:41:42 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:46:34.749 10:41:42 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:46:34.749 10:41:42 -- common/autotest_common.sh@1557 -- # continue 00:46:34.749 10:41:42 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:46:34.749 10:41:42 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:46:34.749 10:41:42 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:46:34.749 10:41:42 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:46:34.749 10:41:42 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:46:34.749 10:41:42 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:46:34.749 10:41:42 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:46:34.749 10:41:42 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:46:34.749 10:41:42 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:46:34.749 10:41:42 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:46:35.009 10:41:42 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:46:35.009 10:41:42 -- common/autotest_common.sh@1545 -- # grep oacs 00:46:35.009 10:41:42 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:46:35.009 10:41:42 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:46:35.009 10:41:42 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:46:35.009 10:41:42 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:46:35.009 10:41:42 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:46:35.009 10:41:42 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:46:35.009 10:41:42 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:46:35.009 10:41:42 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:46:35.009 10:41:42 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:46:35.009 10:41:42 -- common/autotest_common.sh@1557 -- # continue 00:46:35.009 10:41:42 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:46:35.009 10:41:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:35.009 10:41:42 -- common/autotest_common.sh@10 -- # set +x 00:46:35.009 10:41:42 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:46:35.009 10:41:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:46:35.009 10:41:42 -- common/autotest_common.sh@10 -- # set +x 00:46:35.009 10:41:42 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:35.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:46:35.942 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:35.942 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:46:35.942 10:41:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:46:35.942 10:41:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:35.942 10:41:43 -- common/autotest_common.sh@10 -- # set +x 00:46:35.942 10:41:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:46:35.942 10:41:43 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:46:35.942 10:41:43 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:46:35.942 10:41:43 -- common/autotest_common.sh@1577 -- # bdfs=() 00:46:35.942 10:41:43 -- common/autotest_common.sh@1577 -- # local bdfs 00:46:35.942 10:41:43 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:46:35.942 10:41:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:46:35.942 10:41:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:46:35.942 10:41:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:46:35.942 10:41:43 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:46:35.942 10:41:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:46:36.200 10:41:43 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:46:36.200 10:41:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:46:36.200 10:41:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:46:36.200 10:41:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:46:36.200 10:41:43 -- common/autotest_common.sh@1580 -- # device=0x0010 00:46:36.200 10:41:43 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:46:36.200 10:41:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:46:36.200 10:41:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:46:36.200 10:41:43 -- common/autotest_common.sh@1580 -- # device=0x0010 00:46:36.200 10:41:43 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:46:36.200 10:41:43 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:46:36.200 10:41:43 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:46:36.200 10:41:43 -- common/autotest_common.sh@1593 -- # return 0 00:46:36.200 10:41:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:46:36.200 10:41:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:46:36.200 10:41:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:46:36.200 10:41:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:46:36.200 10:41:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:46:36.200 10:41:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:46:36.200 10:41:43 -- common/autotest_common.sh@10 -- # set +x 00:46:36.200 10:41:43 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:46:36.200 10:41:43 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:46:36.200 10:41:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:36.200 10:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:36.200 10:41:43 -- common/autotest_common.sh@10 -- # set +x 00:46:36.200 ************************************ 00:46:36.200 START TEST env 00:46:36.200 ************************************ 00:46:36.200 10:41:43 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:46:36.200 * Looking for test storage... 
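Before the env tests start, nvme_namespace_revert (traced above) reads each controller's OACS word and its unvmcap value. With the values printed in the trace the check works out as follows (a worked example, not part of the scripts):

# OACS bit check with the value from the trace: 0x12a has bit 3 (0x8) set, which per the
# NVMe spec means the Namespace Management/Attachment commands are supported.
# oacs comes from: nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2   (as traced above)
oacs=' 0x12a'
echo $(( oacs & 0x8 ))        # -> 8, the oacs_ns_manage value seen in the log
# unvmcap came back as 0 for both controllers, so the loop simply continues.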
00:46:36.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:46:36.200 10:41:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:46:36.200 10:41:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:36.200 10:41:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:36.200 10:41:44 env -- common/autotest_common.sh@10 -- # set +x 00:46:36.200 ************************************ 00:46:36.200 START TEST env_memory 00:46:36.200 ************************************ 00:46:36.200 10:41:44 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:46:36.457 00:46:36.457 00:46:36.457 CUnit - A unit testing framework for C - Version 2.1-3 00:46:36.457 http://cunit.sourceforge.net/ 00:46:36.457 00:46:36.457 00:46:36.457 Suite: memory 00:46:36.457 Test: alloc and free memory map ...[2024-07-22 10:41:44.170360] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:46:36.457 passed 00:46:36.457 Test: mem map translation ...[2024-07-22 10:41:44.192324] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:46:36.457 [2024-07-22 10:41:44.192589] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:46:36.457 [2024-07-22 10:41:44.193196] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:46:36.457 [2024-07-22 10:41:44.193631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:46:36.457 passed 00:46:36.457 Test: mem map registration ...[2024-07-22 10:41:44.230550] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:46:36.457 [2024-07-22 10:41:44.230776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:46:36.457 passed 00:46:36.457 Test: mem map adjacent registrations ...passed 00:46:36.457 00:46:36.457 Run Summary: Type Total Ran Passed Failed Inactive 00:46:36.457 suites 1 1 n/a 0 0 00:46:36.457 tests 4 4 4 0 0 00:46:36.457 asserts 152 152 152 0 n/a 00:46:36.457 00:46:36.457 Elapsed time = 0.131 seconds 00:46:36.457 00:46:36.457 real 0m0.160s 00:46:36.457 user 0m0.128s 00:46:36.457 sys 0m0.023s 00:46:36.457 10:41:44 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:36.457 10:41:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:46:36.457 ************************************ 00:46:36.457 END TEST env_memory 00:46:36.457 ************************************ 00:46:36.457 10:41:44 env -- common/autotest_common.sh@1142 -- # return 0 00:46:36.457 10:41:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:46:36.457 10:41:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:36.457 10:41:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:36.457 10:41:44 env -- common/autotest_common.sh@10 -- # set +x 00:46:36.457 ************************************ 00:46:36.457 START TEST env_vtophys 
00:46:36.457 ************************************ 00:46:36.457 10:41:44 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:46:36.457 EAL: lib.eal log level changed from notice to debug 00:46:36.457 EAL: Detected lcore 0 as core 0 on socket 0 00:46:36.457 EAL: Detected lcore 1 as core 0 on socket 0 00:46:36.457 EAL: Detected lcore 2 as core 0 on socket 0 00:46:36.457 EAL: Detected lcore 3 as core 0 on socket 0 00:46:36.457 EAL: Detected lcore 4 as core 0 on socket 0 00:46:36.457 EAL: Detected lcore 5 as core 0 on socket 0 00:46:36.457 EAL: Detected lcore 6 as core 0 on socket 0 00:46:36.457 EAL: Detected lcore 7 as core 0 on socket 0 00:46:36.457 EAL: Detected lcore 8 as core 0 on socket 0 00:46:36.457 EAL: Detected lcore 9 as core 0 on socket 0 00:46:36.716 EAL: Maximum logical cores by configuration: 128 00:46:36.716 EAL: Detected CPU lcores: 10 00:46:36.716 EAL: Detected NUMA nodes: 1 00:46:36.716 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:46:36.716 EAL: Detected shared linkage of DPDK 00:46:36.716 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:46:36.716 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:46:36.716 EAL: Registered [vdev] bus. 00:46:36.716 EAL: bus.vdev log level changed from disabled to notice 00:46:36.716 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:46:36.716 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:46:36.716 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:46:36.716 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:46:36.716 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:46:36.716 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:46:36.716 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:46:36.716 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:46:36.716 EAL: No shared files mode enabled, IPC will be disabled 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Selected IOVA mode 'PA' 00:46:36.716 EAL: Probing VFIO support... 00:46:36.716 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:46:36.716 EAL: VFIO modules not loaded, skipping VFIO support... 00:46:36.716 EAL: Ask a virtual area of 0x2e000 bytes 00:46:36.716 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:46:36.716 EAL: Setting up physically contiguous memory... 
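EAL settles on IOVA mode 'PA' here, consistent with no vfio module being present on the VM; the two NVMe controllers were bound to uio_pci_generic by setup.sh earlier in this log. Two generic checks that reflect that state (illustrative commands, not taken from the trace):

# Why IOVA mode 'PA' was selected: no vfio, devices on uio_pci_generic.
lsmod | grep '^vfio' || echo 'vfio not loaded'                 # takes the echo branch on this VM
readlink /sys/bus/pci/devices/0000:00:10.0/driver              # ends in .../uio_pci_generic
readlink /sys/bus/pci/devices/0000:00:11.0/driver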
00:46:36.716 EAL: Setting maximum number of open files to 524288 00:46:36.716 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:46:36.716 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:46:36.716 EAL: Ask a virtual area of 0x61000 bytes 00:46:36.716 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:46:36.716 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:46:36.716 EAL: Ask a virtual area of 0x400000000 bytes 00:46:36.716 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:46:36.716 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:46:36.716 EAL: Ask a virtual area of 0x61000 bytes 00:46:36.716 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:46:36.716 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:46:36.716 EAL: Ask a virtual area of 0x400000000 bytes 00:46:36.716 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:46:36.716 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:46:36.716 EAL: Ask a virtual area of 0x61000 bytes 00:46:36.716 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:46:36.716 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:46:36.716 EAL: Ask a virtual area of 0x400000000 bytes 00:46:36.716 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:46:36.716 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:46:36.716 EAL: Ask a virtual area of 0x61000 bytes 00:46:36.716 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:46:36.716 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:46:36.716 EAL: Ask a virtual area of 0x400000000 bytes 00:46:36.716 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:46:36.716 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:46:36.716 EAL: Hugepages will be freed exactly as allocated. 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: TSC frequency is ~2490000 KHz 00:46:36.716 EAL: Main lcore 0 is ready (tid=7f546033aa00;cpuset=[0]) 00:46:36.716 EAL: Trying to obtain current memory policy. 00:46:36.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:36.716 EAL: Restoring previous memory policy: 0 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was expanded by 2MB 00:46:36.716 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Mem event callback 'spdk:(nil)' registered 00:46:36.716 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:46:36.716 00:46:36.716 00:46:36.716 CUnit - A unit testing framework for C - Version 2.1-3 00:46:36.716 http://cunit.sourceforge.net/ 00:46:36.716 00:46:36.716 00:46:36.716 Suite: components_suite 00:46:36.716 Test: vtophys_malloc_test ...passed 00:46:36.716 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
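A quick sanity check on the memseg reservations above: each of the four lists covers n_segs 8192 segments of hugepage_sz 2 MiB, which matches the 0x400000000-byte (16 GiB) virtual areas EAL reserves per list, so roughly 64 GiB of virtual address space is set aside up front while no physical hugepage memory is touched yet.

# Worked check of the per-list virtual-area size requested above.
echo $(( 8192 * 2 * 1024 * 1024 ))                 # 17179869184 bytes
printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))      # 0x400000000, matching the 'size = 0x400000000' lines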
00:46:36.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:36.716 EAL: Restoring previous memory policy: 4 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was expanded by 4MB 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was shrunk by 4MB 00:46:36.716 EAL: Trying to obtain current memory policy. 00:46:36.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:36.716 EAL: Restoring previous memory policy: 4 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was expanded by 6MB 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was shrunk by 6MB 00:46:36.716 EAL: Trying to obtain current memory policy. 00:46:36.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:36.716 EAL: Restoring previous memory policy: 4 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was expanded by 10MB 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was shrunk by 10MB 00:46:36.716 EAL: Trying to obtain current memory policy. 00:46:36.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:36.716 EAL: Restoring previous memory policy: 4 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was expanded by 18MB 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was shrunk by 18MB 00:46:36.716 EAL: Trying to obtain current memory policy. 00:46:36.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:36.716 EAL: Restoring previous memory policy: 4 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was expanded by 34MB 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was shrunk by 34MB 00:46:36.716 EAL: Trying to obtain current memory policy. 
00:46:36.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:36.716 EAL: Restoring previous memory policy: 4 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was expanded by 66MB 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was shrunk by 66MB 00:46:36.716 EAL: Trying to obtain current memory policy. 00:46:36.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:36.716 EAL: Restoring previous memory policy: 4 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was expanded by 130MB 00:46:36.716 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.716 EAL: request: mp_malloc_sync 00:46:36.716 EAL: No shared files mode enabled, IPC is disabled 00:46:36.716 EAL: Heap on socket 0 was shrunk by 130MB 00:46:36.716 EAL: Trying to obtain current memory policy. 00:46:36.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:36.973 EAL: Restoring previous memory policy: 4 00:46:36.973 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.973 EAL: request: mp_malloc_sync 00:46:36.973 EAL: No shared files mode enabled, IPC is disabled 00:46:36.973 EAL: Heap on socket 0 was expanded by 258MB 00:46:36.973 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.973 EAL: request: mp_malloc_sync 00:46:36.973 EAL: No shared files mode enabled, IPC is disabled 00:46:36.973 EAL: Heap on socket 0 was shrunk by 258MB 00:46:36.973 EAL: Trying to obtain current memory policy. 00:46:36.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:36.973 EAL: Restoring previous memory policy: 4 00:46:36.973 EAL: Calling mem event callback 'spdk:(nil)' 00:46:36.973 EAL: request: mp_malloc_sync 00:46:36.974 EAL: No shared files mode enabled, IPC is disabled 00:46:36.974 EAL: Heap on socket 0 was expanded by 514MB 00:46:37.231 EAL: Calling mem event callback 'spdk:(nil)' 00:46:37.231 EAL: request: mp_malloc_sync 00:46:37.231 EAL: No shared files mode enabled, IPC is disabled 00:46:37.231 EAL: Heap on socket 0 was shrunk by 514MB 00:46:37.231 EAL: Trying to obtain current memory policy. 
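Note: each "expanded by"/"shrunk by" pair in this stretch appears to be vtophys_spdk_malloc_test allocating and freeing a progressively larger buffer (the heap grows by roughly doubling amounts, 2 MB up through 514 MB so far), with the registered 'spdk:(nil)' callback firing as the EAL heap pulls 2 MiB hugepages in on malloc and releases them on free; the final 1 GiB round and the suite summary follow below. While the suite runs, the churn can be watched from another terminal, for example:

  # the hugepage size matches the 2048 kB pages detected above; adjust if different
  watch -n 0.5 cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages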
00:46:37.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:46:37.488 EAL: Restoring previous memory policy: 4 00:46:37.488 EAL: Calling mem event callback 'spdk:(nil)' 00:46:37.488 EAL: request: mp_malloc_sync 00:46:37.488 EAL: No shared files mode enabled, IPC is disabled 00:46:37.488 EAL: Heap on socket 0 was expanded by 1026MB 00:46:37.488 EAL: Calling mem event callback 'spdk:(nil)' 00:46:37.747 passed 00:46:37.747 00:46:37.747 Run Summary: Type Total Ran Passed Failed Inactive 00:46:37.747 suites 1 1 n/a 0 0 00:46:37.747 tests 2 2 2 0 0 00:46:37.747 asserts 5253 5253 5253 0 n/a 00:46:37.747 00:46:37.747 Elapsed time = 0.967 seconds 00:46:37.747 EAL: request: mp_malloc_sync 00:46:37.747 EAL: No shared files mode enabled, IPC is disabled 00:46:37.747 EAL: Heap on socket 0 was shrunk by 1026MB 00:46:37.747 EAL: Calling mem event callback 'spdk:(nil)' 00:46:37.747 EAL: request: mp_malloc_sync 00:46:37.747 EAL: No shared files mode enabled, IPC is disabled 00:46:37.747 EAL: Heap on socket 0 was shrunk by 2MB 00:46:37.747 EAL: No shared files mode enabled, IPC is disabled 00:46:37.747 EAL: No shared files mode enabled, IPC is disabled 00:46:37.747 EAL: No shared files mode enabled, IPC is disabled 00:46:37.747 00:46:37.747 real 0m1.174s 00:46:37.747 user 0m0.639s 00:46:37.747 sys 0m0.400s 00:46:37.747 10:41:45 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:37.747 ************************************ 00:46:37.747 END TEST env_vtophys 00:46:37.747 ************************************ 00:46:37.747 10:41:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:46:37.747 10:41:45 env -- common/autotest_common.sh@1142 -- # return 0 00:46:37.747 10:41:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:46:37.747 10:41:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:37.747 10:41:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:37.747 10:41:45 env -- common/autotest_common.sh@10 -- # set +x 00:46:37.747 ************************************ 00:46:37.747 START TEST env_pci 00:46:37.747 ************************************ 00:46:37.747 10:41:45 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:46:37.747 00:46:37.747 00:46:37.747 CUnit - A unit testing framework for C - Version 2.1-3 00:46:37.747 http://cunit.sourceforge.net/ 00:46:37.747 00:46:37.747 00:46:37.747 Suite: pci 00:46:37.747 Test: pci_hook ...[2024-07-22 10:41:45.617534] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73671 has claimed it 00:46:37.747 passed 00:46:37.747 00:46:37.747 Run Summary: Type Total Ran Passed Failed Inactive 00:46:37.747 suites 1 1 n/a 0 0 00:46:37.747 tests 1 1 1 0 0 00:46:37.747 asserts 25 25 25 0 n/a 00:46:37.747 00:46:37.747 Elapsed time = 0.003 seconds 00:46:37.747 EAL: Cannot find device (10000:00:01.0) 00:46:37.747 EAL: Failed to attach device on primary process 00:46:37.747 00:46:37.747 real 0m0.028s 00:46:37.747 user 0m0.011s 00:46:37.747 sys 0m0.016s 00:46:37.747 10:41:45 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:37.747 10:41:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:46:37.747 ************************************ 00:46:37.747 END TEST env_pci 00:46:37.747 ************************************ 00:46:38.007 10:41:45 env -- common/autotest_common.sh@1142 -- # 
return 0 00:46:38.007 10:41:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:46:38.007 10:41:45 env -- env/env.sh@15 -- # uname 00:46:38.007 10:41:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:46:38.007 10:41:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:46:38.007 10:41:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:46:38.007 10:41:45 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:46:38.007 10:41:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:38.007 10:41:45 env -- common/autotest_common.sh@10 -- # set +x 00:46:38.007 ************************************ 00:46:38.007 START TEST env_dpdk_post_init 00:46:38.007 ************************************ 00:46:38.007 10:41:45 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:46:38.007 EAL: Detected CPU lcores: 10 00:46:38.007 EAL: Detected NUMA nodes: 1 00:46:38.007 EAL: Detected shared linkage of DPDK 00:46:38.007 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:46:38.007 EAL: Selected IOVA mode 'PA' 00:46:38.007 Starting DPDK initialization... 00:46:38.007 Starting SPDK post initialization... 00:46:38.007 SPDK NVMe probe 00:46:38.007 Attaching to 0000:00:10.0 00:46:38.007 Attaching to 0000:00:11.0 00:46:38.007 Attached to 0000:00:10.0 00:46:38.007 Attached to 0000:00:11.0 00:46:38.007 Cleaning up... 00:46:38.007 ************************************ 00:46:38.007 END TEST env_dpdk_post_init 00:46:38.007 ************************************ 00:46:38.007 00:46:38.007 real 0m0.190s 00:46:38.007 user 0m0.054s 00:46:38.007 sys 0m0.036s 00:46:38.007 10:41:45 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:38.007 10:41:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:46:38.265 10:41:45 env -- common/autotest_common.sh@1142 -- # return 0 00:46:38.265 10:41:45 env -- env/env.sh@26 -- # uname 00:46:38.265 10:41:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:46:38.265 10:41:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:46:38.265 10:41:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:38.266 10:41:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:38.266 10:41:45 env -- common/autotest_common.sh@10 -- # set +x 00:46:38.266 ************************************ 00:46:38.266 START TEST env_mem_callbacks 00:46:38.266 ************************************ 00:46:38.266 10:41:45 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:46:38.266 EAL: Detected CPU lcores: 10 00:46:38.266 EAL: Detected NUMA nodes: 1 00:46:38.266 EAL: Detected shared linkage of DPDK 00:46:38.266 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:46:38.266 EAL: Selected IOVA mode 'PA' 00:46:38.266 00:46:38.266 00:46:38.266 CUnit - A unit testing framework for C - Version 2.1-3 00:46:38.266 http://cunit.sourceforge.net/ 00:46:38.266 00:46:38.266 00:46:38.266 Suite: memory 00:46:38.266 Test: test ... 
00:46:38.266 register 0x200000200000 2097152 00:46:38.266 malloc 3145728 00:46:38.266 register 0x200000400000 4194304 00:46:38.266 buf 0x200000500000 len 3145728 PASSED 00:46:38.266 malloc 64 00:46:38.266 buf 0x2000004fff40 len 64 PASSED 00:46:38.266 malloc 4194304 00:46:38.266 register 0x200000800000 6291456 00:46:38.266 buf 0x200000a00000 len 4194304 PASSED 00:46:38.266 free 0x200000500000 3145728 00:46:38.266 free 0x2000004fff40 64 00:46:38.266 unregister 0x200000400000 4194304 PASSED 00:46:38.266 free 0x200000a00000 4194304 00:46:38.266 unregister 0x200000800000 6291456 PASSED 00:46:38.266 malloc 8388608 00:46:38.266 register 0x200000400000 10485760 00:46:38.266 buf 0x200000600000 len 8388608 PASSED 00:46:38.266 free 0x200000600000 8388608 00:46:38.266 unregister 0x200000400000 10485760 PASSED 00:46:38.266 passed 00:46:38.266 00:46:38.266 Run Summary: Type Total Ran Passed Failed Inactive 00:46:38.266 suites 1 1 n/a 0 0 00:46:38.266 tests 1 1 1 0 0 00:46:38.266 asserts 15 15 15 0 n/a 00:46:38.266 00:46:38.266 Elapsed time = 0.008 seconds 00:46:38.266 ************************************ 00:46:38.266 END TEST env_mem_callbacks 00:46:38.266 ************************************ 00:46:38.266 00:46:38.266 real 0m0.154s 00:46:38.266 user 0m0.024s 00:46:38.266 sys 0m0.027s 00:46:38.266 10:41:46 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:38.266 10:41:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:46:38.266 10:41:46 env -- common/autotest_common.sh@1142 -- # return 0 00:46:38.266 ************************************ 00:46:38.266 END TEST env 00:46:38.266 ************************************ 00:46:38.266 00:46:38.266 real 0m2.200s 00:46:38.266 user 0m1.038s 00:46:38.266 sys 0m0.811s 00:46:38.266 10:41:46 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:38.266 10:41:46 env -- common/autotest_common.sh@10 -- # set +x 00:46:38.524 10:41:46 -- common/autotest_common.sh@1142 -- # return 0 00:46:38.524 10:41:46 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:46:38.524 10:41:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:38.524 10:41:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:38.524 10:41:46 -- common/autotest_common.sh@10 -- # set +x 00:46:38.524 ************************************ 00:46:38.524 START TEST rpc 00:46:38.524 ************************************ 00:46:38.524 10:41:46 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:46:38.524 * Looking for test storage... 00:46:38.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:46:38.524 10:41:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=73786 00:46:38.524 10:41:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:46:38.524 10:41:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:46:38.524 10:41:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 73786 00:46:38.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:38.524 10:41:46 rpc -- common/autotest_common.sh@829 -- # '[' -z 73786 ']' 00:46:38.524 10:41:46 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:38.524 10:41:46 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:38.525 10:41:46 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
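Note: the rpc.sh run starting here launches spdk_tgt with the bdev tracepoint group enabled (pid 73786) and then drives the rpc_integrity, rpc_plugins, rpc_trace_cmd_test, go_rpc and rpc_daemon_integrity suites traced below. The core integrity flow can be reproduced by hand against a running target with scripts/rpc.py; the sketch below mirrors the rpc_cmd calls in the trace, with repo-root relative paths assumed and jq used the same way the script uses it.

  ./build/bin/spdk_tgt -e bdev &
  ./scripts/rpc.py bdev_get_bdevs | jq length      # 0 on a fresh target
  ./scripts/rpc.py bdev_malloc_create 8 512        # 8 MiB, 512-byte blocks -> Malloc0 (16384 blocks)
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length      # 2: Malloc0 plus Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0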
00:46:38.525 10:41:46 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:38.525 10:41:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:46:38.525 [2024-07-22 10:41:46.436609] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:46:38.525 [2024-07-22 10:41:46.436682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73786 ] 00:46:38.783 [2024-07-22 10:41:46.554070] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:46:38.783 [2024-07-22 10:41:46.579122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:38.783 [2024-07-22 10:41:46.619040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:46:38.783 [2024-07-22 10:41:46.619301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73786' to capture a snapshot of events at runtime. 00:46:38.783 [2024-07-22 10:41:46.619451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:38.783 [2024-07-22 10:41:46.619497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:38.783 [2024-07-22 10:41:46.619522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73786 for offline analysis/debug. 00:46:38.783 [2024-07-22 10:41:46.619596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:39.350 10:41:47 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:39.350 10:41:47 rpc -- common/autotest_common.sh@862 -- # return 0 00:46:39.350 10:41:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:46:39.350 10:41:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:46:39.350 10:41:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:46:39.350 10:41:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:46:39.350 10:41:47 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:39.350 10:41:47 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:39.350 10:41:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:46:39.350 ************************************ 00:46:39.350 START TEST rpc_integrity 00:46:39.350 ************************************ 00:46:39.350 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:46:39.350 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:39.350 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.350 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:39.350 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.350 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:46:39.608 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:46:39.609 
10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:46:39.609 { 00:46:39.609 "aliases": [ 00:46:39.609 "7d624b10-d189-4e0d-af3c-e5c3a446d7fa" 00:46:39.609 ], 00:46:39.609 "assigned_rate_limits": { 00:46:39.609 "r_mbytes_per_sec": 0, 00:46:39.609 "rw_ios_per_sec": 0, 00:46:39.609 "rw_mbytes_per_sec": 0, 00:46:39.609 "w_mbytes_per_sec": 0 00:46:39.609 }, 00:46:39.609 "block_size": 512, 00:46:39.609 "claimed": false, 00:46:39.609 "driver_specific": {}, 00:46:39.609 "memory_domains": [ 00:46:39.609 { 00:46:39.609 "dma_device_id": "system", 00:46:39.609 "dma_device_type": 1 00:46:39.609 }, 00:46:39.609 { 00:46:39.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:39.609 "dma_device_type": 2 00:46:39.609 } 00:46:39.609 ], 00:46:39.609 "name": "Malloc0", 00:46:39.609 "num_blocks": 16384, 00:46:39.609 "product_name": "Malloc disk", 00:46:39.609 "supported_io_types": { 00:46:39.609 "abort": true, 00:46:39.609 "compare": false, 00:46:39.609 "compare_and_write": false, 00:46:39.609 "copy": true, 00:46:39.609 "flush": true, 00:46:39.609 "get_zone_info": false, 00:46:39.609 "nvme_admin": false, 00:46:39.609 "nvme_io": false, 00:46:39.609 "nvme_io_md": false, 00:46:39.609 "nvme_iov_md": false, 00:46:39.609 "read": true, 00:46:39.609 "reset": true, 00:46:39.609 "seek_data": false, 00:46:39.609 "seek_hole": false, 00:46:39.609 "unmap": true, 00:46:39.609 "write": true, 00:46:39.609 "write_zeroes": true, 00:46:39.609 "zcopy": true, 00:46:39.609 "zone_append": false, 00:46:39.609 "zone_management": false 00:46:39.609 }, 00:46:39.609 "uuid": "7d624b10-d189-4e0d-af3c-e5c3a446d7fa", 00:46:39.609 "zoned": false 00:46:39.609 } 00:46:39.609 ]' 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:39.609 [2024-07-22 10:41:47.405262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:46:39.609 [2024-07-22 10:41:47.405309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:39.609 [2024-07-22 10:41:47.405327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e18f40 00:46:39.609 [2024-07-22 10:41:47.405335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:39.609 [2024-07-22 10:41:47.406504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:39.609 [2024-07-22 10:41:47.406537] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:46:39.609 Passthru0 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:46:39.609 { 00:46:39.609 "aliases": [ 00:46:39.609 "7d624b10-d189-4e0d-af3c-e5c3a446d7fa" 00:46:39.609 ], 00:46:39.609 "assigned_rate_limits": { 00:46:39.609 "r_mbytes_per_sec": 0, 00:46:39.609 "rw_ios_per_sec": 0, 00:46:39.609 "rw_mbytes_per_sec": 0, 00:46:39.609 "w_mbytes_per_sec": 0 00:46:39.609 }, 00:46:39.609 "block_size": 512, 00:46:39.609 "claim_type": "exclusive_write", 00:46:39.609 "claimed": true, 00:46:39.609 "driver_specific": {}, 00:46:39.609 "memory_domains": [ 00:46:39.609 { 00:46:39.609 "dma_device_id": "system", 00:46:39.609 "dma_device_type": 1 00:46:39.609 }, 00:46:39.609 { 00:46:39.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:39.609 "dma_device_type": 2 00:46:39.609 } 00:46:39.609 ], 00:46:39.609 "name": "Malloc0", 00:46:39.609 "num_blocks": 16384, 00:46:39.609 "product_name": "Malloc disk", 00:46:39.609 "supported_io_types": { 00:46:39.609 "abort": true, 00:46:39.609 "compare": false, 00:46:39.609 "compare_and_write": false, 00:46:39.609 "copy": true, 00:46:39.609 "flush": true, 00:46:39.609 "get_zone_info": false, 00:46:39.609 "nvme_admin": false, 00:46:39.609 "nvme_io": false, 00:46:39.609 "nvme_io_md": false, 00:46:39.609 "nvme_iov_md": false, 00:46:39.609 "read": true, 00:46:39.609 "reset": true, 00:46:39.609 "seek_data": false, 00:46:39.609 "seek_hole": false, 00:46:39.609 "unmap": true, 00:46:39.609 "write": true, 00:46:39.609 "write_zeroes": true, 00:46:39.609 "zcopy": true, 00:46:39.609 "zone_append": false, 00:46:39.609 "zone_management": false 00:46:39.609 }, 00:46:39.609 "uuid": "7d624b10-d189-4e0d-af3c-e5c3a446d7fa", 00:46:39.609 "zoned": false 00:46:39.609 }, 00:46:39.609 { 00:46:39.609 "aliases": [ 00:46:39.609 "a38b8b18-ee58-58db-a6e7-3457aae12481" 00:46:39.609 ], 00:46:39.609 "assigned_rate_limits": { 00:46:39.609 "r_mbytes_per_sec": 0, 00:46:39.609 "rw_ios_per_sec": 0, 00:46:39.609 "rw_mbytes_per_sec": 0, 00:46:39.609 "w_mbytes_per_sec": 0 00:46:39.609 }, 00:46:39.609 "block_size": 512, 00:46:39.609 "claimed": false, 00:46:39.609 "driver_specific": { 00:46:39.609 "passthru": { 00:46:39.609 "base_bdev_name": "Malloc0", 00:46:39.609 "name": "Passthru0" 00:46:39.609 } 00:46:39.609 }, 00:46:39.609 "memory_domains": [ 00:46:39.609 { 00:46:39.609 "dma_device_id": "system", 00:46:39.609 "dma_device_type": 1 00:46:39.609 }, 00:46:39.609 { 00:46:39.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:39.609 "dma_device_type": 2 00:46:39.609 } 00:46:39.609 ], 00:46:39.609 "name": "Passthru0", 00:46:39.609 "num_blocks": 16384, 00:46:39.609 "product_name": "passthru", 00:46:39.609 "supported_io_types": { 00:46:39.609 "abort": true, 00:46:39.609 "compare": false, 00:46:39.609 "compare_and_write": false, 00:46:39.609 "copy": true, 00:46:39.609 "flush": true, 00:46:39.609 "get_zone_info": false, 00:46:39.609 "nvme_admin": false, 00:46:39.609 "nvme_io": false, 00:46:39.609 "nvme_io_md": false, 00:46:39.609 "nvme_iov_md": false, 00:46:39.609 "read": true, 
00:46:39.609 "reset": true, 00:46:39.609 "seek_data": false, 00:46:39.609 "seek_hole": false, 00:46:39.609 "unmap": true, 00:46:39.609 "write": true, 00:46:39.609 "write_zeroes": true, 00:46:39.609 "zcopy": true, 00:46:39.609 "zone_append": false, 00:46:39.609 "zone_management": false 00:46:39.609 }, 00:46:39.609 "uuid": "a38b8b18-ee58-58db-a6e7-3457aae12481", 00:46:39.609 "zoned": false 00:46:39.609 } 00:46:39.609 ]' 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:39.609 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:46:39.609 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:46:39.869 10:41:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:46:39.869 00:46:39.869 real 0m0.299s 00:46:39.869 user 0m0.178s 00:46:39.869 sys 0m0.054s 00:46:39.869 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:39.869 10:41:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:39.869 ************************************ 00:46:39.869 END TEST rpc_integrity 00:46:39.869 ************************************ 00:46:39.869 10:41:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:46:39.869 10:41:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:46:39.869 10:41:47 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:39.869 10:41:47 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:39.869 10:41:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:46:39.869 ************************************ 00:46:39.869 START TEST rpc_plugins 00:46:39.869 ************************************ 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.869 10:41:47 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:46:39.869 { 00:46:39.869 "aliases": [ 00:46:39.869 "7240fcb5-b067-4965-867f-dce367ba05b1" 00:46:39.869 ], 00:46:39.869 "assigned_rate_limits": { 00:46:39.869 "r_mbytes_per_sec": 0, 00:46:39.869 "rw_ios_per_sec": 0, 00:46:39.869 "rw_mbytes_per_sec": 0, 00:46:39.869 "w_mbytes_per_sec": 0 00:46:39.869 }, 00:46:39.869 "block_size": 4096, 00:46:39.869 "claimed": false, 00:46:39.869 "driver_specific": {}, 00:46:39.869 "memory_domains": [ 00:46:39.869 { 00:46:39.869 "dma_device_id": "system", 00:46:39.869 "dma_device_type": 1 00:46:39.869 }, 00:46:39.869 { 00:46:39.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:39.869 "dma_device_type": 2 00:46:39.869 } 00:46:39.869 ], 00:46:39.869 "name": "Malloc1", 00:46:39.869 "num_blocks": 256, 00:46:39.869 "product_name": "Malloc disk", 00:46:39.869 "supported_io_types": { 00:46:39.869 "abort": true, 00:46:39.869 "compare": false, 00:46:39.869 "compare_and_write": false, 00:46:39.869 "copy": true, 00:46:39.869 "flush": true, 00:46:39.869 "get_zone_info": false, 00:46:39.869 "nvme_admin": false, 00:46:39.869 "nvme_io": false, 00:46:39.869 "nvme_io_md": false, 00:46:39.869 "nvme_iov_md": false, 00:46:39.869 "read": true, 00:46:39.869 "reset": true, 00:46:39.869 "seek_data": false, 00:46:39.869 "seek_hole": false, 00:46:39.869 "unmap": true, 00:46:39.869 "write": true, 00:46:39.869 "write_zeroes": true, 00:46:39.869 "zcopy": true, 00:46:39.869 "zone_append": false, 00:46:39.869 "zone_management": false 00:46:39.869 }, 00:46:39.869 "uuid": "7240fcb5-b067-4965-867f-dce367ba05b1", 00:46:39.869 "zoned": false 00:46:39.869 } 00:46:39.869 ]' 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:46:39.869 10:41:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:46:39.869 00:46:39.869 real 0m0.156s 00:46:39.869 user 0m0.093s 00:46:39.869 sys 0m0.027s 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:39.869 10:41:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:46:39.869 ************************************ 00:46:39.869 END TEST rpc_plugins 00:46:39.869 ************************************ 00:46:40.128 10:41:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:46:40.129 10:41:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:46:40.129 10:41:47 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:40.129 10:41:47 rpc 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:40.129 10:41:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:46:40.129 ************************************ 00:46:40.129 START TEST rpc_trace_cmd_test 00:46:40.129 ************************************ 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:46:40.129 "bdev": { 00:46:40.129 "mask": "0x8", 00:46:40.129 "tpoint_mask": "0xffffffffffffffff" 00:46:40.129 }, 00:46:40.129 "bdev_nvme": { 00:46:40.129 "mask": "0x4000", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "blobfs": { 00:46:40.129 "mask": "0x80", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "dsa": { 00:46:40.129 "mask": "0x200", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "ftl": { 00:46:40.129 "mask": "0x40", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "iaa": { 00:46:40.129 "mask": "0x1000", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "iscsi_conn": { 00:46:40.129 "mask": "0x2", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "nvme_pcie": { 00:46:40.129 "mask": "0x800", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "nvme_tcp": { 00:46:40.129 "mask": "0x2000", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "nvmf_rdma": { 00:46:40.129 "mask": "0x10", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "nvmf_tcp": { 00:46:40.129 "mask": "0x20", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "scsi": { 00:46:40.129 "mask": "0x4", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "sock": { 00:46:40.129 "mask": "0x8000", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "thread": { 00:46:40.129 "mask": "0x400", 00:46:40.129 "tpoint_mask": "0x0" 00:46:40.129 }, 00:46:40.129 "tpoint_group_mask": "0x8", 00:46:40.129 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73786" 00:46:40.129 }' 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:46:40.129 10:41:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:46:40.129 10:41:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:46:40.129 10:41:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:46:40.129 10:41:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:46:40.129 00:46:40.129 real 0m0.193s 00:46:40.129 user 0m0.145s 00:46:40.129 sys 0m0.036s 00:46:40.129 10:41:48 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:46:40.129 ************************************ 00:46:40.129 END TEST rpc_trace_cmd_test 00:46:40.129 ************************************ 00:46:40.129 10:41:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:46:40.388 10:41:48 rpc -- common/autotest_common.sh@1142 -- # return 0 00:46:40.388 10:41:48 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:46:40.388 10:41:48 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:46:40.388 10:41:48 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:40.388 10:41:48 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:40.388 10:41:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:46:40.388 ************************************ 00:46:40.388 START TEST go_rpc 00:46:40.388 ************************************ 00:46:40.388 10:41:48 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:46:40.388 10:41:48 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.388 10:41:48 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:40.388 10:41:48 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["a686f800-1a15-4df1-88fb-8484b270e06e"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"a686f800-1a15-4df1-88fb-8484b270e06e","zoned":false}]' 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:46:40.388 10:41:48 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.388 10:41:48 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:40.388 10:41:48 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:46:40.388 10:41:48 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:46:40.388 00:46:40.388 real 0m0.194s 00:46:40.388 user 0m0.122s 00:46:40.388 sys 0m0.045s 00:46:40.388 10:41:48 
rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:40.388 10:41:48 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:40.388 ************************************ 00:46:40.388 END TEST go_rpc 00:46:40.388 ************************************ 00:46:40.646 10:41:48 rpc -- common/autotest_common.sh@1142 -- # return 0 00:46:40.646 10:41:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:46:40.646 10:41:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:46:40.646 10:41:48 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:40.646 10:41:48 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:40.646 10:41:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:46:40.646 ************************************ 00:46:40.646 START TEST rpc_daemon_integrity 00:46:40.646 ************************************ 00:46:40.646 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:46:40.646 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:46:40.647 { 00:46:40.647 "aliases": [ 00:46:40.647 "9301a645-726a-413f-827b-64c588860445" 00:46:40.647 ], 00:46:40.647 "assigned_rate_limits": { 00:46:40.647 "r_mbytes_per_sec": 0, 00:46:40.647 "rw_ios_per_sec": 0, 00:46:40.647 "rw_mbytes_per_sec": 0, 00:46:40.647 "w_mbytes_per_sec": 0 00:46:40.647 }, 00:46:40.647 "block_size": 512, 00:46:40.647 "claimed": false, 00:46:40.647 "driver_specific": {}, 00:46:40.647 "memory_domains": [ 00:46:40.647 { 00:46:40.647 "dma_device_id": "system", 00:46:40.647 "dma_device_type": 1 00:46:40.647 }, 00:46:40.647 { 00:46:40.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:40.647 "dma_device_type": 2 00:46:40.647 } 00:46:40.647 ], 00:46:40.647 "name": "Malloc3", 00:46:40.647 "num_blocks": 16384, 00:46:40.647 "product_name": "Malloc disk", 00:46:40.647 "supported_io_types": { 00:46:40.647 "abort": true, 00:46:40.647 "compare": false, 00:46:40.647 "compare_and_write": false, 00:46:40.647 "copy": true, 00:46:40.647 "flush": true, 00:46:40.647 "get_zone_info": false, 
00:46:40.647 "nvme_admin": false, 00:46:40.647 "nvme_io": false, 00:46:40.647 "nvme_io_md": false, 00:46:40.647 "nvme_iov_md": false, 00:46:40.647 "read": true, 00:46:40.647 "reset": true, 00:46:40.647 "seek_data": false, 00:46:40.647 "seek_hole": false, 00:46:40.647 "unmap": true, 00:46:40.647 "write": true, 00:46:40.647 "write_zeroes": true, 00:46:40.647 "zcopy": true, 00:46:40.647 "zone_append": false, 00:46:40.647 "zone_management": false 00:46:40.647 }, 00:46:40.647 "uuid": "9301a645-726a-413f-827b-64c588860445", 00:46:40.647 "zoned": false 00:46:40.647 } 00:46:40.647 ]' 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:40.647 [2024-07-22 10:41:48.524095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:46:40.647 [2024-07-22 10:41:48.524135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:40.647 [2024-07-22 10:41:48.524147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e0c370 00:46:40.647 [2024-07-22 10:41:48.524155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:40.647 [2024-07-22 10:41:48.525098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:40.647 [2024-07-22 10:41:48.525128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:46:40.647 Passthru0 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:46:40.647 { 00:46:40.647 "aliases": [ 00:46:40.647 "9301a645-726a-413f-827b-64c588860445" 00:46:40.647 ], 00:46:40.647 "assigned_rate_limits": { 00:46:40.647 "r_mbytes_per_sec": 0, 00:46:40.647 "rw_ios_per_sec": 0, 00:46:40.647 "rw_mbytes_per_sec": 0, 00:46:40.647 "w_mbytes_per_sec": 0 00:46:40.647 }, 00:46:40.647 "block_size": 512, 00:46:40.647 "claim_type": "exclusive_write", 00:46:40.647 "claimed": true, 00:46:40.647 "driver_specific": {}, 00:46:40.647 "memory_domains": [ 00:46:40.647 { 00:46:40.647 "dma_device_id": "system", 00:46:40.647 "dma_device_type": 1 00:46:40.647 }, 00:46:40.647 { 00:46:40.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:40.647 "dma_device_type": 2 00:46:40.647 } 00:46:40.647 ], 00:46:40.647 "name": "Malloc3", 00:46:40.647 "num_blocks": 16384, 00:46:40.647 "product_name": "Malloc disk", 00:46:40.647 "supported_io_types": { 00:46:40.647 "abort": true, 00:46:40.647 "compare": false, 00:46:40.647 "compare_and_write": false, 00:46:40.647 "copy": true, 00:46:40.647 "flush": true, 00:46:40.647 "get_zone_info": false, 00:46:40.647 "nvme_admin": false, 00:46:40.647 "nvme_io": false, 00:46:40.647 "nvme_io_md": false, 00:46:40.647 "nvme_iov_md": 
false, 00:46:40.647 "read": true, 00:46:40.647 "reset": true, 00:46:40.647 "seek_data": false, 00:46:40.647 "seek_hole": false, 00:46:40.647 "unmap": true, 00:46:40.647 "write": true, 00:46:40.647 "write_zeroes": true, 00:46:40.647 "zcopy": true, 00:46:40.647 "zone_append": false, 00:46:40.647 "zone_management": false 00:46:40.647 }, 00:46:40.647 "uuid": "9301a645-726a-413f-827b-64c588860445", 00:46:40.647 "zoned": false 00:46:40.647 }, 00:46:40.647 { 00:46:40.647 "aliases": [ 00:46:40.647 "ce9032f9-109e-5ac5-a65a-d1111098108e" 00:46:40.647 ], 00:46:40.647 "assigned_rate_limits": { 00:46:40.647 "r_mbytes_per_sec": 0, 00:46:40.647 "rw_ios_per_sec": 0, 00:46:40.647 "rw_mbytes_per_sec": 0, 00:46:40.647 "w_mbytes_per_sec": 0 00:46:40.647 }, 00:46:40.647 "block_size": 512, 00:46:40.647 "claimed": false, 00:46:40.647 "driver_specific": { 00:46:40.647 "passthru": { 00:46:40.647 "base_bdev_name": "Malloc3", 00:46:40.647 "name": "Passthru0" 00:46:40.647 } 00:46:40.647 }, 00:46:40.647 "memory_domains": [ 00:46:40.647 { 00:46:40.647 "dma_device_id": "system", 00:46:40.647 "dma_device_type": 1 00:46:40.647 }, 00:46:40.647 { 00:46:40.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:40.647 "dma_device_type": 2 00:46:40.647 } 00:46:40.647 ], 00:46:40.647 "name": "Passthru0", 00:46:40.647 "num_blocks": 16384, 00:46:40.647 "product_name": "passthru", 00:46:40.647 "supported_io_types": { 00:46:40.647 "abort": true, 00:46:40.647 "compare": false, 00:46:40.647 "compare_and_write": false, 00:46:40.647 "copy": true, 00:46:40.647 "flush": true, 00:46:40.647 "get_zone_info": false, 00:46:40.647 "nvme_admin": false, 00:46:40.647 "nvme_io": false, 00:46:40.647 "nvme_io_md": false, 00:46:40.647 "nvme_iov_md": false, 00:46:40.647 "read": true, 00:46:40.647 "reset": true, 00:46:40.647 "seek_data": false, 00:46:40.647 "seek_hole": false, 00:46:40.647 "unmap": true, 00:46:40.647 "write": true, 00:46:40.647 "write_zeroes": true, 00:46:40.647 "zcopy": true, 00:46:40.647 "zone_append": false, 00:46:40.647 "zone_management": false 00:46:40.647 }, 00:46:40.647 "uuid": "ce9032f9-109e-5ac5-a65a-d1111098108e", 00:46:40.647 "zoned": false 00:46:40.647 } 00:46:40.647 ]' 00:46:40.647 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:46:40.906 00:46:40.906 real 0m0.315s 00:46:40.906 user 0m0.183s 00:46:40.906 sys 0m0.061s 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:40.906 ************************************ 00:46:40.906 END TEST rpc_daemon_integrity 00:46:40.906 ************************************ 00:46:40.906 10:41:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@1142 -- # return 0 00:46:40.906 10:41:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:40.906 10:41:48 rpc -- rpc/rpc.sh@84 -- # killprocess 73786 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@948 -- # '[' -z 73786 ']' 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@952 -- # kill -0 73786 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@953 -- # uname 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73786 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:40.906 killing process with pid 73786 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73786' 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@967 -- # kill 73786 00:46:40.906 10:41:48 rpc -- common/autotest_common.sh@972 -- # wait 73786 00:46:41.164 00:46:41.164 real 0m2.814s 00:46:41.164 user 0m3.508s 00:46:41.164 sys 0m0.854s 00:46:41.164 10:41:49 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:41.164 10:41:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:46:41.164 ************************************ 00:46:41.164 END TEST rpc 00:46:41.164 ************************************ 00:46:41.422 10:41:49 -- common/autotest_common.sh@1142 -- # return 0 00:46:41.422 10:41:49 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:46:41.422 10:41:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:41.422 10:41:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:41.422 10:41:49 -- common/autotest_common.sh@10 -- # set +x 00:46:41.422 ************************************ 00:46:41.422 START TEST skip_rpc 00:46:41.422 ************************************ 00:46:41.422 10:41:49 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:46:41.422 * Looking for test storage... 
00:46:41.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:46:41.422 10:41:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:46:41.422 10:41:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:46:41.422 10:41:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:46:41.422 10:41:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:41.422 10:41:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:41.422 10:41:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:41.422 ************************************ 00:46:41.422 START TEST skip_rpc 00:46:41.422 ************************************ 00:46:41.422 10:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:46:41.422 10:41:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74047 00:46:41.422 10:41:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:46:41.422 10:41:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:46:41.422 10:41:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:46:41.422 [2024-07-22 10:41:49.332388] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:46:41.422 [2024-07-22 10:41:49.332471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74047 ] 00:46:41.679 [2024-07-22 10:41:49.451118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
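Note: the skip_rpc test starting here runs spdk_tgt with --no-rpc-server, sleeps five seconds, and then expects any RPC to fail because /var/tmp/spdk.sock is never created (the "could not connect to a Unix socket" error a few lines further down). A hedged manual equivalent, assuming an SPDK checkout as the working directory:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  if ./scripts/rpc.py spdk_get_version; then
      echo 'unexpected: RPC server is listening' >&2
  else
      echo 'expected failure: no /var/tmp/spdk.sock was created'
  fi
  kill %1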
00:46:41.679 [2024-07-22 10:41:49.476518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:41.679 [2024-07-22 10:41:49.518415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:46.945 2024/07/22 10:41:54 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74047 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 74047 ']' 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 74047 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74047 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74047' 00:46:46.945 killing process with pid 74047 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 74047 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 74047 00:46:46.945 00:46:46.945 real 0m5.359s 00:46:46.945 user 0m5.034s 00:46:46.945 sys 0m0.245s 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:46.945 ************************************ 00:46:46.945 END TEST skip_rpc 00:46:46.945 ************************************ 00:46:46.945 10:41:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:46.945 10:41:54 skip_rpc -- 
common/autotest_common.sh@1142 -- # return 0 00:46:46.945 10:41:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:46:46.945 10:41:54 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:46.945 10:41:54 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:46.945 10:41:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:46.945 ************************************ 00:46:46.945 START TEST skip_rpc_with_json 00:46:46.945 ************************************ 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74134 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74134 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 74134 ']' 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:46.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:46.945 10:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:46:46.945 [2024-07-22 10:41:54.770556] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:46:46.945 [2024-07-22 10:41:54.770639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74134 ] 00:46:47.205 [2024-07-22 10:41:54.887490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
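Note: skip_rpc_with_json starts a normal target (pid 74134), calls nvmf_get_transports --trtype tcp and expects the -19 "No such device" error because no TCP transport exists yet, then creates the transport and snapshots the live configuration with save_config into test/rpc/config.json, presumably so it can be reloaded later in the test. A hedged manual version of the same round trip; the output path and the --json replay step are assumptions about typical usage, not taken from this log.

  ./scripts/rpc.py nvmf_get_transports --trtype tcp \
      || echo 'expected: transport does not exist yet'
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > /tmp/config.json
  # a fresh target can then be booted straight from the saved state
  ./build/bin/spdk_tgt --json /tmp/config.json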
00:46:47.205 [2024-07-22 10:41:54.912838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:47.205 [2024-07-22 10:41:54.955867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:46:47.780 [2024-07-22 10:41:55.609520] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:46:47.780 2024/07/22 10:41:55 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:46:47.780 request: 00:46:47.780 { 00:46:47.780 "method": "nvmf_get_transports", 00:46:47.780 "params": { 00:46:47.780 "trtype": "tcp" 00:46:47.780 } 00:46:47.780 } 00:46:47.780 Got JSON-RPC error response 00:46:47.780 GoRPCClient: error on JSON-RPC call 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:46:47.780 [2024-07-22 10:41:55.621576] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:47.780 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:46:48.039 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:48.039 10:41:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:46:48.039 { 00:46:48.039 "subsystems": [ 00:46:48.039 { 00:46:48.039 "subsystem": "keyring", 00:46:48.039 "config": [] 00:46:48.039 }, 00:46:48.039 { 00:46:48.039 "subsystem": "iobuf", 00:46:48.039 "config": [ 00:46:48.039 { 00:46:48.039 "method": "iobuf_set_options", 00:46:48.039 "params": { 00:46:48.039 "large_bufsize": 135168, 00:46:48.039 "large_pool_count": 1024, 00:46:48.039 "small_bufsize": 8192, 00:46:48.039 "small_pool_count": 8192 00:46:48.039 } 00:46:48.039 } 00:46:48.039 ] 00:46:48.039 }, 00:46:48.039 { 00:46:48.039 "subsystem": "sock", 00:46:48.039 "config": [ 00:46:48.039 { 00:46:48.039 "method": "sock_set_default_impl", 00:46:48.039 "params": { 00:46:48.039 "impl_name": "posix" 00:46:48.039 } 00:46:48.039 }, 00:46:48.039 { 00:46:48.039 "method": "sock_impl_set_options", 00:46:48.039 "params": { 00:46:48.039 "enable_ktls": false, 00:46:48.039 "enable_placement_id": 0, 00:46:48.039 "enable_quickack": false, 00:46:48.039 "enable_recv_pipe": true, 00:46:48.039 "enable_zerocopy_send_client": false, 00:46:48.039 "enable_zerocopy_send_server": true, 00:46:48.039 "impl_name": "ssl", 
00:46:48.039 "recv_buf_size": 4096, 00:46:48.039 "send_buf_size": 4096, 00:46:48.039 "tls_version": 0, 00:46:48.039 "zerocopy_threshold": 0 00:46:48.039 } 00:46:48.039 }, 00:46:48.039 { 00:46:48.039 "method": "sock_impl_set_options", 00:46:48.039 "params": { 00:46:48.039 "enable_ktls": false, 00:46:48.039 "enable_placement_id": 0, 00:46:48.039 "enable_quickack": false, 00:46:48.039 "enable_recv_pipe": true, 00:46:48.039 "enable_zerocopy_send_client": false, 00:46:48.039 "enable_zerocopy_send_server": true, 00:46:48.040 "impl_name": "posix", 00:46:48.040 "recv_buf_size": 2097152, 00:46:48.040 "send_buf_size": 2097152, 00:46:48.040 "tls_version": 0, 00:46:48.040 "zerocopy_threshold": 0 00:46:48.040 } 00:46:48.040 } 00:46:48.040 ] 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "vmd", 00:46:48.040 "config": [] 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "accel", 00:46:48.040 "config": [ 00:46:48.040 { 00:46:48.040 "method": "accel_set_options", 00:46:48.040 "params": { 00:46:48.040 "buf_count": 2048, 00:46:48.040 "large_cache_size": 16, 00:46:48.040 "sequence_count": 2048, 00:46:48.040 "small_cache_size": 128, 00:46:48.040 "task_count": 2048 00:46:48.040 } 00:46:48.040 } 00:46:48.040 ] 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "bdev", 00:46:48.040 "config": [ 00:46:48.040 { 00:46:48.040 "method": "bdev_set_options", 00:46:48.040 "params": { 00:46:48.040 "bdev_auto_examine": true, 00:46:48.040 "bdev_io_cache_size": 256, 00:46:48.040 "bdev_io_pool_size": 65535, 00:46:48.040 "iobuf_large_cache_size": 16, 00:46:48.040 "iobuf_small_cache_size": 128 00:46:48.040 } 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "method": "bdev_raid_set_options", 00:46:48.040 "params": { 00:46:48.040 "process_max_bandwidth_mb_sec": 0, 00:46:48.040 "process_window_size_kb": 1024 00:46:48.040 } 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "method": "bdev_iscsi_set_options", 00:46:48.040 "params": { 00:46:48.040 "timeout_sec": 30 00:46:48.040 } 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "method": "bdev_nvme_set_options", 00:46:48.040 "params": { 00:46:48.040 "action_on_timeout": "none", 00:46:48.040 "allow_accel_sequence": false, 00:46:48.040 "arbitration_burst": 0, 00:46:48.040 "bdev_retry_count": 3, 00:46:48.040 "ctrlr_loss_timeout_sec": 0, 00:46:48.040 "delay_cmd_submit": true, 00:46:48.040 "dhchap_dhgroups": [ 00:46:48.040 "null", 00:46:48.040 "ffdhe2048", 00:46:48.040 "ffdhe3072", 00:46:48.040 "ffdhe4096", 00:46:48.040 "ffdhe6144", 00:46:48.040 "ffdhe8192" 00:46:48.040 ], 00:46:48.040 "dhchap_digests": [ 00:46:48.040 "sha256", 00:46:48.040 "sha384", 00:46:48.040 "sha512" 00:46:48.040 ], 00:46:48.040 "disable_auto_failback": false, 00:46:48.040 "fast_io_fail_timeout_sec": 0, 00:46:48.040 "generate_uuids": false, 00:46:48.040 "high_priority_weight": 0, 00:46:48.040 "io_path_stat": false, 00:46:48.040 "io_queue_requests": 0, 00:46:48.040 "keep_alive_timeout_ms": 10000, 00:46:48.040 "low_priority_weight": 0, 00:46:48.040 "medium_priority_weight": 0, 00:46:48.040 "nvme_adminq_poll_period_us": 10000, 00:46:48.040 "nvme_error_stat": false, 00:46:48.040 "nvme_ioq_poll_period_us": 0, 00:46:48.040 "rdma_cm_event_timeout_ms": 0, 00:46:48.040 "rdma_max_cq_size": 0, 00:46:48.040 "rdma_srq_size": 0, 00:46:48.040 "reconnect_delay_sec": 0, 00:46:48.040 "timeout_admin_us": 0, 00:46:48.040 "timeout_us": 0, 00:46:48.040 "transport_ack_timeout": 0, 00:46:48.040 "transport_retry_count": 4, 00:46:48.040 "transport_tos": 0 00:46:48.040 } 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "method": 
"bdev_nvme_set_hotplug", 00:46:48.040 "params": { 00:46:48.040 "enable": false, 00:46:48.040 "period_us": 100000 00:46:48.040 } 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "method": "bdev_wait_for_examine" 00:46:48.040 } 00:46:48.040 ] 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "scsi", 00:46:48.040 "config": null 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "scheduler", 00:46:48.040 "config": [ 00:46:48.040 { 00:46:48.040 "method": "framework_set_scheduler", 00:46:48.040 "params": { 00:46:48.040 "name": "static" 00:46:48.040 } 00:46:48.040 } 00:46:48.040 ] 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "vhost_scsi", 00:46:48.040 "config": [] 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "vhost_blk", 00:46:48.040 "config": [] 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "ublk", 00:46:48.040 "config": [] 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "nbd", 00:46:48.040 "config": [] 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "nvmf", 00:46:48.040 "config": [ 00:46:48.040 { 00:46:48.040 "method": "nvmf_set_config", 00:46:48.040 "params": { 00:46:48.040 "admin_cmd_passthru": { 00:46:48.040 "identify_ctrlr": false 00:46:48.040 }, 00:46:48.040 "discovery_filter": "match_any" 00:46:48.040 } 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "method": "nvmf_set_max_subsystems", 00:46:48.040 "params": { 00:46:48.040 "max_subsystems": 1024 00:46:48.040 } 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "method": "nvmf_set_crdt", 00:46:48.040 "params": { 00:46:48.040 "crdt1": 0, 00:46:48.040 "crdt2": 0, 00:46:48.040 "crdt3": 0 00:46:48.040 } 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "method": "nvmf_create_transport", 00:46:48.040 "params": { 00:46:48.040 "abort_timeout_sec": 1, 00:46:48.040 "ack_timeout": 0, 00:46:48.040 "buf_cache_size": 4294967295, 00:46:48.040 "c2h_success": true, 00:46:48.040 "data_wr_pool_size": 0, 00:46:48.040 "dif_insert_or_strip": false, 00:46:48.040 "in_capsule_data_size": 4096, 00:46:48.040 "io_unit_size": 131072, 00:46:48.040 "max_aq_depth": 128, 00:46:48.040 "max_io_qpairs_per_ctrlr": 127, 00:46:48.040 "max_io_size": 131072, 00:46:48.040 "max_queue_depth": 128, 00:46:48.040 "num_shared_buffers": 511, 00:46:48.040 "sock_priority": 0, 00:46:48.040 "trtype": "TCP", 00:46:48.040 "zcopy": false 00:46:48.040 } 00:46:48.040 } 00:46:48.040 ] 00:46:48.040 }, 00:46:48.040 { 00:46:48.040 "subsystem": "iscsi", 00:46:48.040 "config": [ 00:46:48.040 { 00:46:48.040 "method": "iscsi_set_options", 00:46:48.040 "params": { 00:46:48.040 "allow_duplicated_isid": false, 00:46:48.040 "chap_group": 0, 00:46:48.040 "data_out_pool_size": 2048, 00:46:48.040 "default_time2retain": 20, 00:46:48.040 "default_time2wait": 2, 00:46:48.040 "disable_chap": false, 00:46:48.040 "error_recovery_level": 0, 00:46:48.040 "first_burst_length": 8192, 00:46:48.040 "immediate_data": true, 00:46:48.040 "immediate_data_pool_size": 16384, 00:46:48.040 "max_connections_per_session": 2, 00:46:48.040 "max_large_datain_per_connection": 64, 00:46:48.040 "max_queue_depth": 64, 00:46:48.040 "max_r2t_per_connection": 4, 00:46:48.040 "max_sessions": 128, 00:46:48.040 "mutual_chap": false, 00:46:48.040 "node_base": "iqn.2016-06.io.spdk", 00:46:48.040 "nop_in_interval": 30, 00:46:48.040 "nop_timeout": 60, 00:46:48.040 "pdu_pool_size": 36864, 00:46:48.040 "require_chap": false 00:46:48.040 } 00:46:48.040 } 00:46:48.040 ] 00:46:48.040 } 00:46:48.040 ] 00:46:48.040 } 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT 
SIGTERM EXIT 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74134 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 74134 ']' 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 74134 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74134 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74134' 00:46:48.040 killing process with pid 74134 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 74134 00:46:48.040 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 74134 00:46:48.300 10:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74174 00:46:48.300 10:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:46:48.300 10:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74174 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 74174 ']' 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 74174 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74174 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:53.569 killing process with pid 74174 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74174' 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 74174 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 74174 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:46:53.569 00:46:53.569 real 0m6.777s 00:46:53.569 user 0m6.457s 00:46:53.569 sys 0m0.591s 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:53.569 10:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:46:53.569 ************************************ 00:46:53.569 END TEST skip_rpc_with_json 00:46:53.569 ************************************ 
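A condensed sketch of the round trip traced above: build a configuration over RPC, save it as JSON, then boot a fresh target from that JSON with the RPC server disabled and confirm the TCP transport was restored. Only commands visible in this log are used; file names here are illustrative, not the test's own paths.

#!/usr/bin/env bash
# Sketch of the skip_rpc_with_json flow: save_config, relaunch from --json, grep the log.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
CFG=/tmp/config.json
LOG=/tmp/tgt.log

$SPDK_BIN -m 0x1 & pid=$!
sleep 2                                    # crude wait; the test polls the RPC socket instead
"$RPC" nvmf_create_transport -t tcp        # ensure the saved config contains a TCP transport
"$RPC" save_config > "$CFG"
kill "$pid"; wait "$pid" 2>/dev/null

# Relaunch purely from the saved JSON, with the RPC server disabled.
$SPDK_BIN --no-rpc-server -m 0x1 --json "$CFG" > "$LOG" 2>&1 & pid=$!
sleep 5
kill "$pid"; wait "$pid" 2>/dev/null
grep -q 'TCP Transport Init' "$LOG" && echo "saved config restored the TCP transport"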
00:46:53.827 10:42:01 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:46:53.827 10:42:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:46:53.827 10:42:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:53.827 10:42:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:53.827 10:42:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:53.827 ************************************ 00:46:53.827 START TEST skip_rpc_with_delay 00:46:53.827 ************************************ 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:46:53.827 [2024-07-22 10:42:01.629044] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
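The error above is the expected outcome of skip_rpc_with_delay: '--wait-for-rpc' defers initialization until an RPC tells the target to continue, so combining it with '--no-rpc-server' is rejected at startup. A minimal reproduction, assuming the same binary path as in this log:

#!/usr/bin/env bash
# Sketch: spdk_tgt must exit non-zero when --wait-for-rpc is given without an RPC server.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

if $SPDK_BIN --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: target started" >&2
    exit 1
fi
echo "got the expected startup failure"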
00:46:53.827 [2024-07-22 10:42:01.629132] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:46:53.827 ************************************ 00:46:53.827 END TEST skip_rpc_with_delay 00:46:53.827 ************************************ 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:46:53.827 00:46:53.827 real 0m0.076s 00:46:53.827 user 0m0.037s 00:46:53.827 sys 0m0.038s 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:53.827 10:42:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:46:53.827 10:42:01 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:46:53.827 10:42:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:46:53.827 10:42:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:46:53.827 10:42:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:46:53.827 10:42:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:53.827 10:42:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:53.827 10:42:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:53.827 ************************************ 00:46:53.827 START TEST exit_on_failed_rpc_init 00:46:53.827 ************************************ 00:46:53.827 10:42:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:46:53.827 10:42:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74283 00:46:53.827 10:42:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:46:53.827 10:42:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74283 00:46:53.827 10:42:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 74283 ']' 00:46:53.827 10:42:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:53.827 10:42:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:53.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:53.827 10:42:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:53.827 10:42:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:53.827 10:42:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:46:54.085 [2024-07-22 10:42:01.774021] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:46:54.085 [2024-07-22 10:42:01.774087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74283 ] 00:46:54.085 [2024-07-22 10:42:01.890932] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:46:54.085 [2024-07-22 10:42:01.915143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:54.085 [2024-07-22 10:42:01.955820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:46:55.028 [2024-07-22 10:42:02.667516] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:46:55.028 [2024-07-22 10:42:02.667603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74313 ] 00:46:55.028 [2024-07-22 10:42:02.783939] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
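The failure that follows below comes down to two targets contending for the same default RPC socket. A sketch under the same assumptions (default socket /var/tmp/spdk.sock, binary path as in this log):

#!/usr/bin/env bash
# Sketch: a second spdk_tgt on the default RPC socket cannot bind it and should exit non-zero.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$SPDK_BIN -m 0x1 & first=$!      # this instance claims /var/tmp/spdk.sock
sleep 2                          # crude wait; the test polls the socket instead

if $SPDK_BIN -m 0x2; then        # second instance, same default socket
    echo "unexpected: second target started" >&2
else
    echo "second target failed RPC init, as expected"
fi
kill "$first"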
00:46:55.028 [2024-07-22 10:42:02.809051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:55.028 [2024-07-22 10:42:02.848383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:55.028 [2024-07-22 10:42:02.848482] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:46:55.028 [2024-07-22 10:42:02.848493] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:46:55.028 [2024-07-22 10:42:02.848501] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74283 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 74283 ']' 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 74283 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:55.028 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74283 00:46:55.286 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:55.286 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:55.286 killing process with pid 74283 00:46:55.286 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74283' 00:46:55.286 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 74283 00:46:55.286 10:42:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 74283 00:46:55.545 00:46:55.545 real 0m1.546s 00:46:55.545 user 0m1.667s 00:46:55.545 sys 0m0.398s 00:46:55.545 10:42:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:55.545 10:42:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:46:55.545 ************************************ 00:46:55.545 END TEST exit_on_failed_rpc_init 00:46:55.545 ************************************ 00:46:55.545 10:42:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:46:55.545 10:42:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:46:55.545 00:46:55.545 real 0m14.186s 00:46:55.545 user 0m13.321s 00:46:55.545 sys 0m1.565s 00:46:55.545 10:42:03 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:55.545 10:42:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.545 ************************************ 
00:46:55.545 END TEST skip_rpc 00:46:55.545 ************************************ 00:46:55.545 10:42:03 -- common/autotest_common.sh@1142 -- # return 0 00:46:55.545 10:42:03 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:46:55.545 10:42:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:55.545 10:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:55.545 10:42:03 -- common/autotest_common.sh@10 -- # set +x 00:46:55.545 ************************************ 00:46:55.545 START TEST rpc_client 00:46:55.545 ************************************ 00:46:55.545 10:42:03 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:46:55.804 * Looking for test storage... 00:46:55.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:46:55.804 10:42:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:46:55.804 OK 00:46:55.804 10:42:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:46:55.804 00:46:55.804 real 0m0.147s 00:46:55.804 user 0m0.060s 00:46:55.804 sys 0m0.096s 00:46:55.804 10:42:03 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:55.804 10:42:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:46:55.804 ************************************ 00:46:55.804 END TEST rpc_client 00:46:55.804 ************************************ 00:46:55.804 10:42:03 -- common/autotest_common.sh@1142 -- # return 0 00:46:55.804 10:42:03 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:46:55.804 10:42:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:55.804 10:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:55.804 10:42:03 -- common/autotest_common.sh@10 -- # set +x 00:46:55.804 ************************************ 00:46:55.804 START TEST json_config 00:46:55.804 ************************************ 00:46:55.804 10:42:03 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:46:55.804 10:42:03 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@7 -- # uname -s 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:55.804 10:42:03 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:55.804 10:42:03 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:55.804 10:42:03 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:55.804 10:42:03 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.804 10:42:03 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.804 10:42:03 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.804 10:42:03 json_config -- paths/export.sh@5 -- # export PATH 00:46:55.804 10:42:03 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@47 -- # : 0 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:55.804 10:42:03 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:55.804 10:42:03 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:46:55.804 10:42:03 json_config -- 
json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:46:55.804 10:42:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:46:55.804 10:42:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:46:55.804 10:42:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:46:55.804 10:42:03 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:46:55.804 10:42:03 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:46:55.804 10:42:03 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:46:55.804 10:42:03 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:46:56.063 INFO: JSON configuration test init 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:46:56.063 10:42:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:46:56.063 10:42:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:46:56.063 10:42:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:46:56.063 10:42:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:46:56.063 10:42:03 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:46:56.063 10:42:03 json_config -- json_config/common.sh@9 -- # local app=target 00:46:56.063 10:42:03 json_config -- json_config/common.sh@10 -- # shift 00:46:56.063 10:42:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:46:56.063 10:42:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:46:56.063 10:42:03 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:46:56.063 10:42:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:46:56.063 10:42:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:46:56.063 10:42:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=74431 00:46:56.063 Waiting for target to run... 00:46:56.063 10:42:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
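Everything in the json_config suite talks to a target on a non-default RPC socket. A small sketch of that pattern, using only flags and scripts visible in this log; gen_nvme.sh output depends on the local NVMe devices, and the save_config output path below is illustrative.

#!/usr/bin/env bash
# Sketch: run the target on a dedicated RPC socket and address it with 'rpc.py -s'.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
GEN_NVME=/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
SOCK=/var/tmp/spdk_tgt.sock

$SPDK_BIN -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc & pid=$!
sleep 2                                     # crude wait; the suite uses waitforlisten instead

# Load a generated config over the custom socket (as json_config.sh does above),
# then dump the resulting configuration back out as JSON.
"$GEN_NVME" --json-with-subsystems | "$RPC" -s "$SOCK" load_config
"$RPC" -s "$SOCK" save_config > /tmp/spdk_tgt_config.json
kill "$pid"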
00:46:56.064 10:42:03 json_config -- json_config/common.sh@25 -- # waitforlisten 74431 /var/tmp/spdk_tgt.sock 00:46:56.064 10:42:03 json_config -- common/autotest_common.sh@829 -- # '[' -z 74431 ']' 00:46:56.064 10:42:03 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:46:56.064 10:42:03 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:46:56.064 10:42:03 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:56.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:46:56.064 10:42:03 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:46:56.064 10:42:03 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:56.064 10:42:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:46:56.064 [2024-07-22 10:42:03.804937] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:46:56.064 [2024-07-22 10:42:03.805008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74431 ] 00:46:56.322 [2024-07-22 10:42:04.128991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:46:56.322 [2024-07-22 10:42:04.154429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:56.322 [2024-07-22 10:42:04.179184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:56.890 10:42:04 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:56.890 00:46:56.890 10:42:04 json_config -- common/autotest_common.sh@862 -- # return 0 00:46:56.890 10:42:04 json_config -- json_config/common.sh@26 -- # echo '' 00:46:56.890 10:42:04 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:46:56.890 10:42:04 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:46:56.890 10:42:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:46:56.890 10:42:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:46:56.890 10:42:04 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:46:56.890 10:42:04 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:46:56.891 10:42:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:56.891 10:42:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:46:56.891 10:42:04 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:46:56.891 10:42:04 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:46:56.891 10:42:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:46:57.457 10:42:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:46:57.457 10:42:05 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:46:57.457 10:42:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@48 -- # local get_types 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@51 -- # sort 00:46:57.457 10:42:05 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:46:57.458 10:42:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:57.458 10:42:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@59 -- # return 0 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:46:57.458 10:42:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:46:57.458 10:42:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:46:57.458 10:42:05 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:46:57.458 10:42:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:46:57.716 MallocForNvmf0 00:46:57.716 10:42:05 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:46:57.716 10:42:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:46:57.975 MallocForNvmf1 00:46:57.975 10:42:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:46:57.975 10:42:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:46:58.233 [2024-07-22 10:42:05.936360] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:58.233 10:42:05 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:46:58.233 10:42:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:46:58.233 10:42:06 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:46:58.234 10:42:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:46:58.491 10:42:06 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:46:58.491 10:42:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:46:58.749 10:42:06 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:46:58.749 10:42:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:46:58.749 [2024-07-22 10:42:06.668415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:59.007 10:42:06 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:46:59.007 10:42:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:59.007 10:42:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:46:59.007 10:42:06 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:46:59.007 10:42:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:59.007 10:42:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:46:59.007 10:42:06 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:46:59.007 10:42:06 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:46:59.007 10:42:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:46:59.266 MallocBdevForConfigChangeCheck 00:46:59.266 10:42:06 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:46:59.266 10:42:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:59.266 10:42:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:46:59.266 10:42:07 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:46:59.266 10:42:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
save_config 00:46:59.525 INFO: shutting down applications... 00:46:59.525 10:42:07 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:46:59.525 10:42:07 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:46:59.525 10:42:07 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:46:59.525 10:42:07 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:46:59.525 10:42:07 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:46:59.783 Calling clear_iscsi_subsystem 00:46:59.783 Calling clear_nvmf_subsystem 00:46:59.783 Calling clear_nbd_subsystem 00:46:59.783 Calling clear_ublk_subsystem 00:46:59.783 Calling clear_vhost_blk_subsystem 00:46:59.783 Calling clear_vhost_scsi_subsystem 00:46:59.783 Calling clear_bdev_subsystem 00:46:59.783 10:42:07 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:46:59.783 10:42:07 json_config -- json_config/json_config.sh@347 -- # count=100 00:46:59.783 10:42:07 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:46:59.783 10:42:07 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:46:59.783 10:42:07 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:46:59.783 10:42:07 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:47:00.362 10:42:08 json_config -- json_config/json_config.sh@349 -- # break 00:47:00.362 10:42:08 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:47:00.362 10:42:08 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:47:00.362 10:42:08 json_config -- json_config/common.sh@31 -- # local app=target 00:47:00.362 10:42:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:47:00.362 10:42:08 json_config -- json_config/common.sh@35 -- # [[ -n 74431 ]] 00:47:00.362 10:42:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 74431 00:47:00.362 10:42:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:47:00.362 10:42:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:47:00.362 10:42:08 json_config -- json_config/common.sh@41 -- # kill -0 74431 00:47:00.362 10:42:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:47:00.620 10:42:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:47:00.620 10:42:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:47:00.620 10:42:08 json_config -- json_config/common.sh@41 -- # kill -0 74431 00:47:00.620 10:42:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:47:00.620 10:42:08 json_config -- json_config/common.sh@43 -- # break 00:47:00.620 SPDK target shutdown done 00:47:00.620 10:42:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:47:00.620 10:42:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:47:00.620 INFO: relaunching applications... 00:47:00.620 10:42:08 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
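The spdk_tgt_config.json reloaded below was built earlier in this run by create_nvmf_subsystem_config. Collected in one place, and assuming the same rpc.py socket, that RPC sequence amounts to the sketch below (flags copied from the trace above, comments only describe the positional arguments).

#!/usr/bin/env bash
# Sketch: the RPCs that put the malloc bdevs, the TCP transport and the cnode1
# subsystem into the configuration later saved as spdk_tgt_config.json.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # total size in MB, block size in bytes
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0           # flags as used in the trace above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420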
00:47:00.620 10:42:08 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:47:00.620 10:42:08 json_config -- json_config/common.sh@9 -- # local app=target 00:47:00.620 10:42:08 json_config -- json_config/common.sh@10 -- # shift 00:47:00.620 10:42:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:47:00.620 10:42:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:47:00.620 10:42:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:47:00.620 10:42:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:47:00.620 10:42:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:47:00.620 10:42:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=74700 00:47:00.620 Waiting for target to run... 00:47:00.620 10:42:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:47:00.620 10:42:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:47:00.620 10:42:08 json_config -- json_config/common.sh@25 -- # waitforlisten 74700 /var/tmp/spdk_tgt.sock 00:47:00.620 10:42:08 json_config -- common/autotest_common.sh@829 -- # '[' -z 74700 ']' 00:47:00.620 10:42:08 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:47:00.620 10:42:08 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:00.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:47:00.620 10:42:08 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:47:00.620 10:42:08 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:00.620 10:42:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:47:00.878 [2024-07-22 10:42:08.573334] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:00.878 [2024-07-22 10:42:08.573417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74700 ] 00:47:01.136 [2024-07-22 10:42:08.902556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:01.136 [2024-07-22 10:42:08.926711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:01.136 [2024-07-22 10:42:08.958920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:01.394 [2024-07-22 10:42:09.258400] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:01.394 [2024-07-22 10:42:09.290395] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:01.656 10:42:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:01.656 10:42:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:47:01.656 00:47:01.656 10:42:09 json_config -- json_config/common.sh@26 -- # echo '' 00:47:01.656 10:42:09 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:47:01.656 INFO: Checking if target configuration is the same... 
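The comparison traced next (and repeated after the deliberate MallocBdevForConfigChangeCheck deletion) reduces to sorting both JSON documents with config_filter.py and diffing them. A condensed sketch, assuming config_filter.py reads stdin and writes stdout as the trace suggests; temp-file names are illustrative.

#!/usr/bin/env bash
# Sketch of the json_diff.sh check: normalize both configs, then 'diff -u' decides
# whether the running target still matches the saved JSON file.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
SAVED=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

live=$(mktemp)
saved=$(mktemp)
$RPC save_config | "$FILTER" -method sort > "$live"
"$FILTER" -method sort < "$SAVED" > "$saved"

if diff -u "$live" "$saved"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live" "$saved"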
00:47:01.656 10:42:09 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:47:01.656 10:42:09 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:47:01.656 10:42:09 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:47:01.656 10:42:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:47:01.656 + '[' 2 -ne 2 ']' 00:47:01.656 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:47:01.656 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:47:01.656 + rootdir=/home/vagrant/spdk_repo/spdk 00:47:01.656 +++ basename /dev/fd/62 00:47:01.656 ++ mktemp /tmp/62.XXX 00:47:01.656 + tmp_file_1=/tmp/62.vOY 00:47:01.656 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:47:01.656 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:47:01.656 + tmp_file_2=/tmp/spdk_tgt_config.json.JUF 00:47:01.656 + ret=0 00:47:01.656 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:47:01.914 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:47:01.914 + diff -u /tmp/62.vOY /tmp/spdk_tgt_config.json.JUF 00:47:01.914 INFO: JSON config files are the same 00:47:01.914 + echo 'INFO: JSON config files are the same' 00:47:01.914 + rm /tmp/62.vOY /tmp/spdk_tgt_config.json.JUF 00:47:01.914 + exit 0 00:47:01.914 10:42:09 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:47:01.914 INFO: changing configuration and checking if this can be detected... 00:47:01.914 10:42:09 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:47:01.914 10:42:09 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:47:01.914 10:42:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:47:02.172 10:42:10 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:47:02.172 10:42:10 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:47:02.172 10:42:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:47:02.172 + '[' 2 -ne 2 ']' 00:47:02.172 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:47:02.172 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:47:02.172 + rootdir=/home/vagrant/spdk_repo/spdk 00:47:02.172 +++ basename /dev/fd/62 00:47:02.172 ++ mktemp /tmp/62.XXX 00:47:02.172 + tmp_file_1=/tmp/62.QTd 00:47:02.172 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:47:02.172 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:47:02.172 + tmp_file_2=/tmp/spdk_tgt_config.json.oYJ 00:47:02.172 + ret=0 00:47:02.172 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:47:02.431 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:47:02.689 + diff -u /tmp/62.QTd /tmp/spdk_tgt_config.json.oYJ 00:47:02.689 + ret=1 00:47:02.689 + echo '=== Start of file: /tmp/62.QTd ===' 00:47:02.689 + cat /tmp/62.QTd 00:47:02.689 + echo '=== End of file: /tmp/62.QTd ===' 00:47:02.689 + echo '' 00:47:02.689 + echo '=== Start of file: /tmp/spdk_tgt_config.json.oYJ ===' 00:47:02.689 + cat /tmp/spdk_tgt_config.json.oYJ 00:47:02.689 + echo '=== End of file: /tmp/spdk_tgt_config.json.oYJ ===' 00:47:02.689 + echo '' 00:47:02.689 + rm /tmp/62.QTd /tmp/spdk_tgt_config.json.oYJ 00:47:02.689 + exit 1 00:47:02.689 INFO: configuration change detected. 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@321 -- # [[ -n 74700 ]] 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@197 -- # uname -s 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:47:02.689 10:42:10 json_config -- json_config/json_config.sh@327 -- # killprocess 74700 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@948 -- # '[' -z 74700 ']' 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@952 -- # kill -0 74700 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@953 -- # uname 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74700 00:47:02.689 
10:42:10 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:02.689 killing process with pid 74700 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74700' 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@967 -- # kill 74700 00:47:02.689 10:42:10 json_config -- common/autotest_common.sh@972 -- # wait 74700 00:47:02.947 10:42:10 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:47:02.947 10:42:10 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:47:02.947 10:42:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:47:02.947 10:42:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:47:02.947 10:42:10 json_config -- json_config/json_config.sh@332 -- # return 0 00:47:02.947 INFO: Success 00:47:02.947 10:42:10 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:47:02.947 ************************************ 00:47:02.947 END TEST json_config 00:47:02.947 ************************************ 00:47:02.947 00:47:02.947 real 0m7.169s 00:47:02.947 user 0m9.540s 00:47:02.947 sys 0m1.947s 00:47:02.947 10:42:10 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:02.947 10:42:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:47:02.947 10:42:10 -- common/autotest_common.sh@1142 -- # return 0 00:47:02.947 10:42:10 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:47:02.947 10:42:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:02.947 10:42:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:02.947 10:42:10 -- common/autotest_common.sh@10 -- # set +x 00:47:02.947 ************************************ 00:47:02.947 START TEST json_config_extra_key 00:47:02.947 ************************************ 00:47:02.947 10:42:10 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:47:03.205 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:03.205 10:42:10 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:03.205 10:42:10 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:03.205 10:42:10 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:03.205 10:42:10 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:03.205 10:42:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:03.206 10:42:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:03.206 10:42:10 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:03.206 10:42:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:47:03.206 10:42:10 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:03.206 10:42:10 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:47:03.206 10:42:10 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:03.206 10:42:10 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:03.206 10:42:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:03.206 10:42:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:03.206 10:42:10 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:03.206 10:42:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:03.206 10:42:10 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:03.206 10:42:10 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:47:03.206 INFO: launching applications... 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:47:03.206 10:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74865 00:47:03.206 Waiting for target to run... 00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
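The waitforlisten step below polls the freshly launched spdk_tgt until its RPC socket answers. A simplified sketch of that loop (the real helper in autotest_common.sh is more involved; rpc_get_methods is used here only as a liveness probe, and the retry count echoes the max_retries=100 above):

  for ((i = 0; i < 100; i++)); do
      # succeeds only once the target is listening on /var/tmp/spdk_tgt.sock
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done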
00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74865 /var/tmp/spdk_tgt.sock 00:47:03.206 10:42:10 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 74865 ']' 00:47:03.206 10:42:10 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:47:03.206 10:42:10 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:47:03.206 10:42:10 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:03.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:47:03.206 10:42:10 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:47:03.206 10:42:10 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:03.206 10:42:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:47:03.206 [2024-07-22 10:42:11.046482] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:03.206 [2024-07-22 10:42:11.046552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74865 ] 00:47:03.464 [2024-07-22 10:42:11.387010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:03.722 [2024-07-22 10:42:11.411199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:03.722 [2024-07-22 10:42:11.442505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:03.981 10:42:11 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:03.981 00:47:03.981 10:42:11 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:47:03.981 10:42:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:47:03.981 INFO: shutting down applications... 00:47:03.981 10:42:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
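The shutdown that follows sends SIGINT to the target and then polls until the PID is gone. A condensed sketch of the loop behind the "kill -0 74865" lines below (the 30-iteration bound and 0.5 s sleep are taken from the trace):

  pid=74865
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2> /dev/null || break   # kill -0 only tests whether the process still exists
      sleep 0.5
  done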
00:47:03.981 10:42:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:47:03.981 10:42:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:47:03.981 10:42:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:47:03.981 10:42:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74865 ]] 00:47:03.981 10:42:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74865 00:47:03.981 10:42:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:47:03.981 10:42:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:47:03.981 10:42:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74865 00:47:03.981 10:42:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:47:04.548 10:42:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:47:04.548 10:42:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:47:04.548 10:42:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74865 00:47:04.548 10:42:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:47:04.548 10:42:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:47:04.548 SPDK target shutdown done 00:47:04.548 10:42:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:47:04.548 10:42:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:47:04.548 Success 00:47:04.548 10:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:47:04.548 00:47:04.548 real 0m1.530s 00:47:04.548 user 0m1.245s 00:47:04.548 sys 0m0.395s 00:47:04.548 10:42:12 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:04.548 ************************************ 00:47:04.548 END TEST json_config_extra_key 00:47:04.548 ************************************ 00:47:04.548 10:42:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:47:04.548 10:42:12 -- common/autotest_common.sh@1142 -- # return 0 00:47:04.548 10:42:12 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:47:04.548 10:42:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:04.548 10:42:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:04.548 10:42:12 -- common/autotest_common.sh@10 -- # set +x 00:47:04.548 ************************************ 00:47:04.548 START TEST alias_rpc 00:47:04.548 ************************************ 00:47:04.548 10:42:12 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:47:04.807 * Looking for test storage... 
00:47:04.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:47:04.807 10:42:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:47:04.807 10:42:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74947 00:47:04.807 10:42:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74947 00:47:04.807 10:42:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:04.807 10:42:12 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 74947 ']' 00:47:04.807 10:42:12 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:04.807 10:42:12 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:04.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:04.807 10:42:12 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:04.807 10:42:12 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:04.807 10:42:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:04.807 [2024-07-22 10:42:12.656162] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:04.807 [2024-07-22 10:42:12.656237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74947 ] 00:47:05.066 [2024-07-22 10:42:12.773091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
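The alias test below starts a bare spdk_tgt and then replays a configuration through the RPC client with load_config -i, exercising the deprecated method aliases along the way. A minimal sketch of that replay, assuming load_config reads the JSON on stdin as the bare invocation in the trace suggests (the exact effect of -i is not shown here):

  # capture the current configuration, then feed it straight back in
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/config.json
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < /tmp/config.json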
00:47:05.066 [2024-07-22 10:42:12.791305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:05.066 [2024-07-22 10:42:12.831806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:05.633 10:42:13 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:05.633 10:42:13 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:47:05.633 10:42:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:47:05.891 10:42:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74947 00:47:05.891 10:42:13 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 74947 ']' 00:47:05.891 10:42:13 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 74947 00:47:05.891 10:42:13 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:47:05.891 10:42:13 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:05.891 10:42:13 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74947 00:47:05.891 10:42:13 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:05.891 killing process with pid 74947 00:47:05.891 10:42:13 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:05.891 10:42:13 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74947' 00:47:05.891 10:42:13 alias_rpc -- common/autotest_common.sh@967 -- # kill 74947 00:47:05.891 10:42:13 alias_rpc -- common/autotest_common.sh@972 -- # wait 74947 00:47:06.153 00:47:06.153 real 0m1.571s 00:47:06.153 user 0m1.604s 00:47:06.153 sys 0m0.461s 00:47:06.153 10:42:14 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:06.153 10:42:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:06.153 ************************************ 00:47:06.153 END TEST alias_rpc 00:47:06.153 ************************************ 00:47:06.411 10:42:14 -- common/autotest_common.sh@1142 -- # return 0 00:47:06.411 10:42:14 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:47:06.411 10:42:14 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:47:06.411 10:42:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:06.411 10:42:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:06.411 10:42:14 -- common/autotest_common.sh@10 -- # set +x 00:47:06.411 ************************************ 00:47:06.411 START TEST dpdk_mem_utility 00:47:06.411 ************************************ 00:47:06.411 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:47:06.411 * Looking for test storage... 
00:47:06.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:47:06.411 10:42:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:47:06.411 10:42:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=75033 00:47:06.411 10:42:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:06.411 10:42:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 75033 00:47:06.411 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 75033 ']' 00:47:06.411 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:06.411 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:06.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:06.411 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:06.411 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:06.411 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:47:06.411 [2024-07-22 10:42:14.299447] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:06.411 [2024-07-22 10:42:14.299521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75033 ] 00:47:06.669 [2024-07-22 10:42:14.417887] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
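The memory report that follows is produced in two steps: an RPC asks the target to dump its DPDK allocation state to a file, and a helper script summarizes that dump. A sketch using the commands visible in this trace (dpdk_mem_info.py presumably reads the /tmp/spdk_mem_dump.txt path returned by the RPC below):

  # ask the running target to write out its DPDK memory state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools and memzones, then drill into heap 0
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0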
00:47:06.669 [2024-07-22 10:42:14.441486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:06.669 [2024-07-22 10:42:14.482206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:07.235 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:07.235 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:47:07.235 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:47:07.235 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:47:07.235 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:07.235 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:47:07.235 { 00:47:07.235 "filename": "/tmp/spdk_mem_dump.txt" 00:47:07.235 } 00:47:07.235 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:07.235 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:47:07.495 DPDK memory size 814.000000 MiB in 1 heap(s) 00:47:07.495 1 heaps totaling size 814.000000 MiB 00:47:07.495 size: 814.000000 MiB heap id: 0 00:47:07.495 end heaps---------- 00:47:07.495 8 mempools totaling size 598.116089 MiB 00:47:07.495 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:47:07.495 size: 158.602051 MiB name: PDU_data_out_Pool 00:47:07.495 size: 84.521057 MiB name: bdev_io_75033 00:47:07.495 size: 51.011292 MiB name: evtpool_75033 00:47:07.495 size: 50.003479 MiB name: msgpool_75033 00:47:07.495 size: 21.763794 MiB name: PDU_Pool 00:47:07.495 size: 19.513306 MiB name: SCSI_TASK_Pool 00:47:07.495 size: 0.026123 MiB name: Session_Pool 00:47:07.495 end mempools------- 00:47:07.495 6 memzones totaling size 4.142822 MiB 00:47:07.495 size: 1.000366 MiB name: RG_ring_0_75033 00:47:07.495 size: 1.000366 MiB name: RG_ring_1_75033 00:47:07.495 size: 1.000366 MiB name: RG_ring_4_75033 00:47:07.495 size: 1.000366 MiB name: RG_ring_5_75033 00:47:07.495 size: 0.125366 MiB name: RG_ring_2_75033 00:47:07.495 size: 0.015991 MiB name: RG_ring_3_75033 00:47:07.495 end memzones------- 00:47:07.495 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:47:07.495 heap id: 0 total size: 814.000000 MiB number of busy elements: 227 number of free elements: 15 00:47:07.495 list of free elements. 
size: 12.485291 MiB 00:47:07.495 element at address: 0x200000400000 with size: 1.999512 MiB 00:47:07.495 element at address: 0x200018e00000 with size: 0.999878 MiB 00:47:07.495 element at address: 0x200019000000 with size: 0.999878 MiB 00:47:07.495 element at address: 0x200003e00000 with size: 0.996277 MiB 00:47:07.495 element at address: 0x200031c00000 with size: 0.994446 MiB 00:47:07.495 element at address: 0x200013800000 with size: 0.978699 MiB 00:47:07.495 element at address: 0x200007000000 with size: 0.959839 MiB 00:47:07.495 element at address: 0x200019200000 with size: 0.936584 MiB 00:47:07.495 element at address: 0x200000200000 with size: 0.837036 MiB 00:47:07.495 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:47:07.495 element at address: 0x20000b200000 with size: 0.489807 MiB 00:47:07.495 element at address: 0x200000800000 with size: 0.487061 MiB 00:47:07.495 element at address: 0x200019400000 with size: 0.485657 MiB 00:47:07.495 element at address: 0x200027e00000 with size: 0.397949 MiB 00:47:07.495 element at address: 0x200003a00000 with size: 0.350769 MiB 00:47:07.495 list of standard malloc elements. size: 199.252136 MiB 00:47:07.495 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:47:07.495 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:47:07.495 element at address: 0x200018efff80 with size: 1.000122 MiB 00:47:07.495 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:47:07.495 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:47:07.495 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:47:07.495 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:47:07.495 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:47:07.495 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:47:07.495 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:47:07.495 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003adb300 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003adb500 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003affa80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003affb40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:47:07.495 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:47:07.495 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94240 
with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:47:07.496 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6dc80 with size: 0.000183 MiB 
00:47:07.496 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:47:07.496 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:47:07.496 list of memzone associated elements. 
size: 602.262573 MiB 00:47:07.496 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:47:07.496 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:47:07.496 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:47:07.496 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:47:07.496 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:47:07.496 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_75033_0 00:47:07.496 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:47:07.496 associated memzone info: size: 48.002930 MiB name: MP_evtpool_75033_0 00:47:07.496 element at address: 0x200003fff380 with size: 48.003052 MiB 00:47:07.496 associated memzone info: size: 48.002930 MiB name: MP_msgpool_75033_0 00:47:07.496 element at address: 0x2000195be940 with size: 20.255554 MiB 00:47:07.496 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:47:07.496 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:47:07.496 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:47:07.496 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:47:07.496 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_75033 00:47:07.496 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:47:07.496 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_75033 00:47:07.496 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:47:07.496 associated memzone info: size: 1.007996 MiB name: MP_evtpool_75033 00:47:07.496 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:47:07.496 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:47:07.496 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:47:07.496 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:47:07.496 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:47:07.496 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:47:07.496 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:47:07.496 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:47:07.496 element at address: 0x200003eff180 with size: 1.000488 MiB 00:47:07.496 associated memzone info: size: 1.000366 MiB name: RG_ring_0_75033 00:47:07.496 element at address: 0x200003affc00 with size: 1.000488 MiB 00:47:07.496 associated memzone info: size: 1.000366 MiB name: RG_ring_1_75033 00:47:07.496 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:47:07.496 associated memzone info: size: 1.000366 MiB name: RG_ring_4_75033 00:47:07.496 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:47:07.496 associated memzone info: size: 1.000366 MiB name: RG_ring_5_75033 00:47:07.496 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:47:07.496 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_75033 00:47:07.496 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:47:07.496 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:47:07.496 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:47:07.496 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:47:07.496 element at address: 0x20001947c540 with size: 0.250488 MiB 00:47:07.496 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:47:07.496 element at address: 0x200003adf880 with size: 0.125488 MiB 00:47:07.496 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_75033 00:47:07.496 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:47:07.496 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:47:07.496 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:47:07.496 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:47:07.496 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:47:07.496 associated memzone info: size: 0.015991 MiB name: RG_ring_3_75033 00:47:07.496 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:47:07.496 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:47:07.497 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:47:07.497 associated memzone info: size: 0.000183 MiB name: MP_msgpool_75033 00:47:07.497 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:47:07.497 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_75033 00:47:07.497 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:47:07.497 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:47:07.497 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:47:07.497 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 75033 00:47:07.497 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 75033 ']' 00:47:07.497 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 75033 00:47:07.497 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:47:07.497 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:07.497 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75033 00:47:07.497 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:07.497 killing process with pid 75033 00:47:07.497 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:07.497 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75033' 00:47:07.497 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 75033 00:47:07.497 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 75033 00:47:07.756 00:47:07.756 real 0m1.482s 00:47:07.756 user 0m1.484s 00:47:07.756 sys 0m0.415s 00:47:07.756 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:07.756 ************************************ 00:47:07.756 END TEST dpdk_mem_utility 00:47:07.756 ************************************ 00:47:07.757 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:47:07.757 10:42:15 -- common/autotest_common.sh@1142 -- # return 0 00:47:07.757 10:42:15 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:47:07.757 10:42:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:07.757 10:42:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:07.757 10:42:15 -- common/autotest_common.sh@10 -- # set +x 00:47:07.757 ************************************ 00:47:07.757 START TEST event 00:47:07.757 ************************************ 00:47:07.757 10:42:15 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:47:08.017 * Looking for test storage... 
00:47:08.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:47:08.017 10:42:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:47:08.017 10:42:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:47:08.017 10:42:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:47:08.017 10:42:15 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:47:08.017 10:42:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:08.017 10:42:15 event -- common/autotest_common.sh@10 -- # set +x 00:47:08.017 ************************************ 00:47:08.017 START TEST event_perf 00:47:08.017 ************************************ 00:47:08.017 10:42:15 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:47:08.017 Running I/O for 1 seconds...[2024-07-22 10:42:15.838378] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:08.017 [2024-07-22 10:42:15.838463] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75123 ] 00:47:08.280 [2024-07-22 10:42:15.958330] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:08.280 [2024-07-22 10:42:15.980770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:08.280 [2024-07-22 10:42:16.026430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:08.280 [2024-07-22 10:42:16.026627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:47:08.281 Running I/O for 1 seconds...[2024-07-22 10:42:16.028179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:08.281 [2024-07-22 10:42:16.028183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:47:09.233 00:47:09.233 lcore 0: 202184 00:47:09.233 lcore 1: 202183 00:47:09.233 lcore 2: 202184 00:47:09.233 lcore 3: 202184 00:47:09.233 done. 00:47:09.233 00:47:09.233 real 0m1.276s 00:47:09.233 user 0m4.091s 00:47:09.233 sys 0m0.064s 00:47:09.233 10:42:17 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:09.233 10:42:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:47:09.233 ************************************ 00:47:09.233 END TEST event_perf 00:47:09.233 ************************************ 00:47:09.233 10:42:17 event -- common/autotest_common.sh@1142 -- # return 0 00:47:09.233 10:42:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:47:09.233 10:42:17 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:47:09.233 10:42:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:09.233 10:42:17 event -- common/autotest_common.sh@10 -- # set +x 00:47:09.233 ************************************ 00:47:09.233 START TEST event_reactor 00:47:09.233 ************************************ 00:47:09.233 10:42:17 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:47:09.491 [2024-07-22 10:42:17.186363] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:47:09.491 [2024-07-22 10:42:17.186454] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75156 ] 00:47:09.491 [2024-07-22 10:42:17.306624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:09.491 [2024-07-22 10:42:17.330343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:09.491 [2024-07-22 10:42:17.384235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:10.871 test_start 00:47:10.871 oneshot 00:47:10.871 tick 100 00:47:10.871 tick 100 00:47:10.871 tick 250 00:47:10.871 tick 100 00:47:10.871 tick 100 00:47:10.871 tick 100 00:47:10.871 tick 250 00:47:10.871 tick 500 00:47:10.871 tick 100 00:47:10.871 tick 100 00:47:10.871 tick 250 00:47:10.871 tick 100 00:47:10.871 tick 100 00:47:10.871 test_end 00:47:10.871 00:47:10.871 real 0m1.279s 00:47:10.871 user 0m1.113s 00:47:10.871 sys 0m0.061s 00:47:10.871 10:42:18 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:10.871 10:42:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 ************************************ 00:47:10.871 END TEST event_reactor 00:47:10.871 ************************************ 00:47:10.871 10:42:18 event -- common/autotest_common.sh@1142 -- # return 0 00:47:10.871 10:42:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:47:10.871 10:42:18 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:47:10.871 10:42:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:10.871 10:42:18 event -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 ************************************ 00:47:10.871 START TEST event_reactor_perf 00:47:10.871 ************************************ 00:47:10.871 10:42:18 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:47:10.871 [2024-07-22 10:42:18.536543] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:10.871 [2024-07-22 10:42:18.536646] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75191 ] 00:47:10.871 [2024-07-22 10:42:18.657326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
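Each of these event tests drives a small standalone binary for a fixed time. They can be rerun by hand with the flags shown in the traces, for example (command lines copied from above; -t appears to be the run time in seconds and -m the core mask):

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1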
00:47:10.871 [2024-07-22 10:42:18.682565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:10.871 [2024-07-22 10:42:18.736442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:12.251 test_start 00:47:12.251 test_end 00:47:12.251 Performance: 505683 events per second 00:47:12.251 00:47:12.251 real 0m1.283s 00:47:12.251 user 0m1.116s 00:47:12.251 sys 0m0.061s 00:47:12.251 10:42:19 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:12.251 10:42:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:47:12.251 ************************************ 00:47:12.251 END TEST event_reactor_perf 00:47:12.251 ************************************ 00:47:12.251 10:42:19 event -- common/autotest_common.sh@1142 -- # return 0 00:47:12.251 10:42:19 event -- event/event.sh@49 -- # uname -s 00:47:12.251 10:42:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:47:12.251 10:42:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:47:12.251 10:42:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:12.251 10:42:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:12.251 10:42:19 event -- common/autotest_common.sh@10 -- # set +x 00:47:12.251 ************************************ 00:47:12.251 START TEST event_scheduler 00:47:12.251 ************************************ 00:47:12.251 10:42:19 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:47:12.251 * Looking for test storage... 00:47:12.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:47:12.251 10:42:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:47:12.251 10:42:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=75253 00:47:12.251 10:42:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:47:12.251 10:42:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:47:12.251 10:42:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 75253 00:47:12.251 10:42:20 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 75253 ']' 00:47:12.251 10:42:20 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:12.251 10:42:20 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:12.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:12.251 10:42:20 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:12.251 10:42:20 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:12.251 10:42:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:47:12.251 [2024-07-22 10:42:20.050990] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:47:12.251 [2024-07-22 10:42:20.051570] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75253 ] 00:47:12.251 [2024-07-22 10:42:20.170880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:12.510 [2024-07-22 10:42:20.184767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:12.510 [2024-07-22 10:42:20.227488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:12.510 [2024-07-22 10:42:20.227668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:12.510 [2024-07-22 10:42:20.227839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:47:12.510 [2024-07-22 10:42:20.227843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:47:13.079 10:42:20 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:13.079 10:42:20 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:47:13.079 10:42:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:47:13.079 10:42:20 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.079 10:42:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:47:13.079 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:47:13.079 POWER: Cannot set governor of lcore 0 to userspace 00:47:13.079 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:47:13.079 POWER: Cannot set governor of lcore 0 to performance 00:47:13.079 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:47:13.079 POWER: Cannot set governor of lcore 0 to userspace 00:47:13.079 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:47:13.079 POWER: Cannot set governor of lcore 0 to userspace 00:47:13.079 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:47:13.079 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:47:13.079 POWER: Unable to set Power Management Environment for lcore 0 00:47:13.079 [2024-07-22 10:42:20.900502] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:47:13.079 [2024-07-22 10:42:20.900514] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:47:13.079 [2024-07-22 10:42:20.900522] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:47:13.079 [2024-07-22 10:42:20.900533] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:47:13.079 [2024-07-22 10:42:20.900539] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:47:13.079 [2024-07-22 10:42:20.900546] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:47:13.079 10:42:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.079 10:42:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:47:13.079 10:42:20 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.079 10:42:20 event.event_scheduler -- common/autotest_common.sh@10 
-- # set +x 00:47:13.080 [2024-07-22 10:42:20.968574] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:47:13.080 10:42:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.080 10:42:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:47:13.080 10:42:20 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:13.080 10:42:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:13.080 10:42:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:47:13.080 ************************************ 00:47:13.080 START TEST scheduler_create_thread 00:47:13.080 ************************************ 00:47:13.080 10:42:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:47:13.080 10:42:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:47:13.080 10:42:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.080 10:42:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:13.080 2 00:47:13.080 10:42:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.080 10:42:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:47:13.080 10:42:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.080 10:42:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:13.080 3 00:47:13.080 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.080 10:42:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:47:13.080 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.080 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:13.338 4 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:13.338 5 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:47:13.338 6 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:13.338 7 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:13.338 8 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:13.338 9 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:13.338 10 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.338 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:13.596 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.596 10:42:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:47:13.596 10:42:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:47:13.596 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.596 10:42:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:14.529 10:42:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:47:14.529 10:42:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:47:14.529 10:42:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.529 10:42:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:15.464 10:42:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:15.464 10:42:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:47:15.464 10:42:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:47:15.464 10:42:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:15.464 10:42:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:16.398 ************************************ 00:47:16.398 END TEST scheduler_create_thread 00:47:16.398 ************************************ 00:47:16.398 10:42:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.398 00:47:16.398 real 0m3.217s 00:47:16.398 user 0m0.023s 00:47:16.398 sys 0m0.009s 00:47:16.398 10:42:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:16.398 10:42:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:47:16.398 10:42:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:47:16.398 10:42:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 75253 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 75253 ']' 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 75253 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75253 00:47:16.398 killing process with pid 75253 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75253' 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 75253 00:47:16.398 10:42:24 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 75253 00:47:16.656 [2024-07-22 10:42:24.579182] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
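The scheduler test above is driven entirely over RPC; a minimal sketch of the same sequence, assuming rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, the plugin from test/event/scheduler is on PYTHONPATH, and noting that the thread ids 11 and 12 are the ones returned in this particular run, not fixed values:

  scheduler=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler   # so --plugin scheduler_plugin resolves
  $scheduler -m 0xF -p 0x2 --wait-for-rpc -f &                          # 4 cores, main lcore 2, start paused
  scheduler_pid=$!
  # wait for /var/tmp/spdk.sock to appear before issuing RPCs (the harness uses waitforlisten)
  $rpc framework_set_scheduler dynamic          # falls back to defaults when no cpufreq governor is available, as logged above
  $rpc framework_start_init
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete 12
  kill "$scheduler_pid"                         # the harness uses killprocess $scheduler_pid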
00:47:16.914 ************************************ 00:47:16.914 END TEST event_scheduler 00:47:16.915 ************************************ 00:47:16.915 00:47:16.915 real 0m4.955s 00:47:16.915 user 0m9.948s 00:47:16.915 sys 0m0.392s 00:47:16.915 10:42:24 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:16.915 10:42:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:47:17.173 10:42:24 event -- common/autotest_common.sh@1142 -- # return 0 00:47:17.174 10:42:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:47:17.174 10:42:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:47:17.174 10:42:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:17.174 10:42:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:17.174 10:42:24 event -- common/autotest_common.sh@10 -- # set +x 00:47:17.174 ************************************ 00:47:17.174 START TEST app_repeat 00:47:17.174 ************************************ 00:47:17.174 10:42:24 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75376 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:47:17.174 Process app_repeat pid: 75376 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75376' 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:47:17.174 spdk_app_start Round 0 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:47:17.174 10:42:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75376 /var/tmp/spdk-nbd.sock 00:47:17.174 10:42:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75376 ']' 00:47:17.174 10:42:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:47:17.174 10:42:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:17.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:47:17.174 10:42:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:47:17.174 10:42:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:17.174 10:42:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:47:17.174 [2024-07-22 10:42:24.943713] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:47:17.174 [2024-07-22 10:42:24.943786] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75376 ] 00:47:17.174 [2024-07-22 10:42:25.062166] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:17.174 [2024-07-22 10:42:25.087196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:17.432 [2024-07-22 10:42:25.129363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:17.432 [2024-07-22 10:42:25.129363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:17.997 10:42:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:17.997 10:42:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:47:17.997 10:42:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:47:18.255 Malloc0 00:47:18.255 10:42:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:47:18.513 Malloc1 00:47:18.513 10:42:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:18.513 10:42:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:47:18.513 /dev/nbd0 00:47:18.772 10:42:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:47:18.772 10:42:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:47:18.772 10:42:26 event.app_repeat -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:47:18.772 1+0 records in 00:47:18.772 1+0 records out 00:47:18.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306617 s, 13.4 MB/s 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:47:18.772 10:42:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:18.772 10:42:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:18.772 10:42:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:47:18.772 /dev/nbd1 00:47:18.772 10:42:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:47:18.772 10:42:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:47:18.772 1+0 records in 00:47:18.772 1+0 records out 00:47:18.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372632 s, 11.0 MB/s 00:47:18.772 10:42:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:19.031 10:42:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:47:19.031 10:42:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:19.031 10:42:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:47:19.031 10:42:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:47:19.031 10:42:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:19.031 10:42:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:19.031 10:42:26 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:19.031 10:42:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:19.031 10:42:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:19.031 10:42:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:47:19.031 { 00:47:19.031 "bdev_name": "Malloc0", 00:47:19.031 "nbd_device": "/dev/nbd0" 00:47:19.031 }, 00:47:19.031 { 00:47:19.031 "bdev_name": "Malloc1", 00:47:19.031 "nbd_device": "/dev/nbd1" 00:47:19.031 } 00:47:19.031 ]' 00:47:19.031 10:42:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:47:19.031 { 00:47:19.031 "bdev_name": "Malloc0", 00:47:19.031 "nbd_device": "/dev/nbd0" 00:47:19.031 }, 00:47:19.031 { 00:47:19.031 "bdev_name": "Malloc1", 00:47:19.031 "nbd_device": "/dev/nbd1" 00:47:19.031 } 00:47:19.031 ]' 00:47:19.031 10:42:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:19.031 10:42:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:47:19.031 /dev/nbd1' 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:47:19.290 /dev/nbd1' 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:47:19.290 256+0 records in 00:47:19.290 256+0 records out 00:47:19.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113251 s, 92.6 MB/s 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:19.290 10:42:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:47:19.290 256+0 records in 00:47:19.290 256+0 records out 00:47:19.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276251 s, 38.0 MB/s 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:47:19.290 256+0 records in 00:47:19.290 256+0 records out 00:47:19.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291614 s, 36.0 MB/s 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:47:19.290 
10:42:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:19.290 10:42:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:19.549 10:42:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:47:19.808 10:42:27 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:19.808 10:42:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:47:20.067 10:42:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:47:20.067 10:42:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:47:20.067 10:42:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:47:20.067 10:42:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:47:20.067 10:42:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:47:20.067 10:42:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:47:20.067 10:42:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:47:20.327 [2024-07-22 10:42:28.098577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:20.327 [2024-07-22 10:42:28.134360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:20.327 [2024-07-22 10:42:28.134359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:20.327 [2024-07-22 10:42:28.175342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:47:20.327 [2024-07-22 10:42:28.175391] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:47:23.614 spdk_app_start Round 1 00:47:23.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:47:23.614 10:42:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:47:23.614 10:42:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:47:23.614 10:42:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75376 /var/tmp/spdk-nbd.sock 00:47:23.614 10:42:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75376 ']' 00:47:23.614 10:42:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:47:23.614 10:42:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:23.614 10:42:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
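Each app_repeat round above follows the same create/attach/verify/teardown pattern over the nbd socket; a minimal sketch of one round, assuming the same paths as this run, the nbd kernel module already loaded, and an app_repeat instance listening on /var/tmp/spdk-nbd.sock:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  $rpc bdev_malloc_create 64 4096                           # -> Malloc0
  $rpc bdev_malloc_create 64 4096                           # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of reference data
  for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write through the nbd device
    cmp -b -n 1M "$tmp" "$dev"                              # read back and compare
  done
  rm "$tmp"
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc spdk_kill_instance SIGTERM                           # ends the round; app_repeat then starts the next one, as seen above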
00:47:23.614 10:42:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:23.614 10:42:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:47:23.614 10:42:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:23.614 10:42:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:47:23.614 10:42:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:47:23.614 Malloc0 00:47:23.614 10:42:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:47:23.614 Malloc1 00:47:23.873 10:42:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:47:23.873 /dev/nbd0 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:47:23.873 1+0 records in 00:47:23.873 1+0 records out 
00:47:23.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323022 s, 12.7 MB/s 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:47:23.873 10:42:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:23.873 10:42:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:47:24.132 /dev/nbd1 00:47:24.132 10:42:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:47:24.132 10:42:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:47:24.132 1+0 records in 00:47:24.132 1+0 records out 00:47:24.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174604 s, 23.5 MB/s 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:47:24.132 10:42:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:47:24.132 10:42:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:24.132 10:42:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:24.132 10:42:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:24.132 10:42:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:24.132 10:42:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:47:24.390 { 00:47:24.390 "bdev_name": "Malloc0", 00:47:24.390 "nbd_device": "/dev/nbd0" 00:47:24.390 }, 00:47:24.390 { 00:47:24.390 "bdev_name": "Malloc1", 00:47:24.390 "nbd_device": "/dev/nbd1" 00:47:24.390 } 
00:47:24.390 ]' 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:47:24.390 { 00:47:24.390 "bdev_name": "Malloc0", 00:47:24.390 "nbd_device": "/dev/nbd0" 00:47:24.390 }, 00:47:24.390 { 00:47:24.390 "bdev_name": "Malloc1", 00:47:24.390 "nbd_device": "/dev/nbd1" 00:47:24.390 } 00:47:24.390 ]' 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:47:24.390 /dev/nbd1' 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:47:24.390 /dev/nbd1' 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:47:24.390 256+0 records in 00:47:24.390 256+0 records out 00:47:24.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123886 s, 84.6 MB/s 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:24.390 10:42:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:47:24.649 256+0 records in 00:47:24.649 256+0 records out 00:47:24.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266619 s, 39.3 MB/s 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:47:24.649 256+0 records in 00:47:24.649 256+0 records out 00:47:24.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267624 s, 39.2 MB/s 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:47:24.649 10:42:32 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:24.649 10:42:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:24.907 10:42:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:25.165 10:42:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:25.165 10:42:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:25.165 10:42:32 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:47:25.165 10:42:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:25.165 10:42:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:47:25.165 10:42:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:25.165 10:42:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:47:25.165 10:42:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:47:25.165 10:42:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:47:25.165 10:42:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:47:25.165 10:42:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:47:25.165 10:42:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:47:25.165 10:42:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:47:25.423 10:42:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:47:25.680 [2024-07-22 10:42:33.406606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:25.680 [2024-07-22 10:42:33.442889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:25.680 [2024-07-22 10:42:33.442895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:25.680 [2024-07-22 10:42:33.485040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:47:25.680 [2024-07-22 10:42:33.485096] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:47:29.009 10:42:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:47:29.009 spdk_app_start Round 2 00:47:29.009 10:42:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:47:29.009 10:42:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75376 /var/tmp/spdk-nbd.sock 00:47:29.009 10:42:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75376 ']' 00:47:29.009 10:42:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:47:29.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:47:29.009 10:42:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:29.009 10:42:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:47:29.009 10:42:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:29.009 10:42:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:47:29.009 10:42:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:29.009 10:42:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:47:29.009 10:42:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:47:29.009 Malloc0 00:47:29.009 10:42:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:47:29.009 Malloc1 00:47:29.009 10:42:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:47:29.009 10:42:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:29.009 10:42:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:47:29.009 10:42:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:47:29.009 10:42:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:29.009 10:42:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:47:29.009 10:42:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:47:29.009 10:42:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:29.009 10:42:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:47:29.009 10:42:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:47:29.009 10:42:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:29.010 10:42:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:47:29.010 10:42:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:47:29.010 10:42:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:47:29.010 10:42:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:29.010 10:42:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:47:29.319 /dev/nbd0 00:47:29.319 10:42:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:47:29.319 10:42:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:47:29.319 1+0 records in 00:47:29.319 1+0 records out 
00:47:29.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411108 s, 10.0 MB/s 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:47:29.319 10:42:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:47:29.319 10:42:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:29.319 10:42:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:29.319 10:42:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:47:29.578 /dev/nbd1 00:47:29.578 10:42:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:47:29.578 10:42:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:47:29.578 1+0 records in 00:47:29.578 1+0 records out 00:47:29.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340184 s, 12.0 MB/s 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:47:29.578 10:42:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:47:29.578 10:42:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:29.578 10:42:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:29.578 10:42:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:29.578 10:42:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:29.578 10:42:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:47:29.836 { 00:47:29.836 "bdev_name": "Malloc0", 00:47:29.836 "nbd_device": "/dev/nbd0" 00:47:29.836 }, 00:47:29.836 { 00:47:29.836 "bdev_name": "Malloc1", 00:47:29.836 "nbd_device": "/dev/nbd1" 00:47:29.836 } 
00:47:29.836 ]' 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:47:29.836 { 00:47:29.836 "bdev_name": "Malloc0", 00:47:29.836 "nbd_device": "/dev/nbd0" 00:47:29.836 }, 00:47:29.836 { 00:47:29.836 "bdev_name": "Malloc1", 00:47:29.836 "nbd_device": "/dev/nbd1" 00:47:29.836 } 00:47:29.836 ]' 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:47:29.836 /dev/nbd1' 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:47:29.836 /dev/nbd1' 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:47:29.836 256+0 records in 00:47:29.836 256+0 records out 00:47:29.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497085 s, 211 MB/s 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:47:29.836 256+0 records in 00:47:29.836 256+0 records out 00:47:29.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237418 s, 44.2 MB/s 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:47:29.836 256+0 records in 00:47:29.836 256+0 records out 00:47:29.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266905 s, 39.3 MB/s 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:47:29.836 10:42:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:29.836 10:42:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:30.095 10:42:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:30.095 10:42:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:30.095 10:42:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:30.095 10:42:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:30.095 10:42:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:30.095 10:42:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:30.095 10:42:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:47:30.095 10:42:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:47:30.095 10:42:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:30.095 10:42:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:30.353 10:42:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:30.611 10:42:38 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:47:30.611 10:42:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:47:30.611 10:42:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:47:30.870 10:42:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:47:30.870 [2024-07-22 10:42:38.697064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:30.870 [2024-07-22 10:42:38.731826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:30.870 [2024-07-22 10:42:38.731827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:30.870 [2024-07-22 10:42:38.773275] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:47:30.870 [2024-07-22 10:42:38.773328] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:47:34.153 10:42:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75376 /var/tmp/spdk-nbd.sock 00:47:34.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75376 ']' 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
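The nbd teardown traced a few lines earlier (nbd_stop_disk per device, then waiting for the name to leave /proc/partitions) is approximately the sketch below. The wrapper name nbd_stop_and_wait is mine; the trace spreads the same steps across nbd_stop_disks and waitfornbd_exit.
# Approximate shape of the stop/exit-wait sequence; sleep interval is an assumption.
nbd_stop_and_wait() {
    local rpc_server=$1 nbd=$2
    local nbd_name i
    nbd_name=$(basename "$nbd")
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$nbd"
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1      # still listed: the kernel has not released it yet
        else
            break          # gone from /proc/partitions, safe to continue
        fi
    done
}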
00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:47:34.153 10:42:41 event.app_repeat -- event/event.sh@39 -- # killprocess 75376 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 75376 ']' 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 75376 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75376 00:47:34.153 killing process with pid 75376 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75376' 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@967 -- # kill 75376 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@972 -- # wait 75376 00:47:34.153 spdk_app_start is called in Round 0. 00:47:34.153 Shutdown signal received, stop current app iteration 00:47:34.153 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 reinitialization... 00:47:34.153 spdk_app_start is called in Round 1. 00:47:34.153 Shutdown signal received, stop current app iteration 00:47:34.153 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 reinitialization... 00:47:34.153 spdk_app_start is called in Round 2. 00:47:34.153 Shutdown signal received, stop current app iteration 00:47:34.153 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 reinitialization... 00:47:34.153 spdk_app_start is called in Round 3. 
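The killprocess helper exercised at the end of app_repeat (kill -0 probe, name check via ps, SIGTERM, wait) behaves roughly as sketched below; the real helper also special-cases a sudo wrapper process, a branch this run never takes.
# Loose reconstruction of the killprocess flow in the trace above.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 0                       # already gone, nothing to do
    process_name=$(ps --no-headers -o comm= "$pid")  # Linux branch, per the uname check in the trace
    # (sudo special-casing omitted here; not exercised in this run)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}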
00:47:34.153 Shutdown signal received, stop current app iteration 00:47:34.153 10:42:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:47:34.153 10:42:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:47:34.153 00:47:34.153 real 0m17.056s 00:47:34.153 user 0m37.068s 00:47:34.153 sys 0m3.106s 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:34.153 10:42:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:47:34.153 ************************************ 00:47:34.153 END TEST app_repeat 00:47:34.153 ************************************ 00:47:34.153 10:42:42 event -- common/autotest_common.sh@1142 -- # return 0 00:47:34.153 10:42:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:47:34.153 10:42:42 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:47:34.153 10:42:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:34.153 10:42:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:34.153 10:42:42 event -- common/autotest_common.sh@10 -- # set +x 00:47:34.153 ************************************ 00:47:34.153 START TEST cpu_locks 00:47:34.153 ************************************ 00:47:34.153 10:42:42 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:47:34.412 * Looking for test storage... 00:47:34.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:47:34.412 10:42:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:47:34.412 10:42:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:47:34.412 10:42:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:47:34.412 10:42:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:47:34.412 10:42:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:34.412 10:42:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:34.412 10:42:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:47:34.412 ************************************ 00:47:34.412 START TEST default_locks 00:47:34.412 ************************************ 00:47:34.412 10:42:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:47:34.412 10:42:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75974 00:47:34.412 10:42:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:47:34.412 10:42:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75974 00:47:34.412 10:42:42 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 75974 ']' 00:47:34.412 10:42:42 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:34.412 10:42:42 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:34.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:34.412 10:42:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
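The default_locks case starting here boils down to: launch one spdk_tgt pinned to core 0, then assert that the CPU-core lock is actually held. A condensed sketch, with the spdk_cpu_lock pattern taken straight from the trace:
# Condensed outline of the default_locks flow beginning above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"                        # default RPC socket /var/tmp/spdk.sock
# locks_exist: the reactor pinned to core 0 should hold a lock file matching spdk_cpu_lock
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock
killprocess "$spdk_tgt_pid"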
00:47:34.412 10:42:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:34.412 10:42:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:47:34.412 [2024-07-22 10:42:42.229817] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:34.412 [2024-07-22 10:42:42.229882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75974 ] 00:47:34.670 [2024-07-22 10:42:42.346834] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:34.670 [2024-07-22 10:42:42.370501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:34.670 [2024-07-22 10:42:42.410359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:35.237 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:35.237 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:47:35.237 10:42:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75974 00:47:35.237 10:42:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75974 00:47:35.237 10:42:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75974 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 75974 ']' 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 75974 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75974 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:35.494 killing process with pid 75974 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75974' 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 75974 00:47:35.494 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 75974 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75974 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75974 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75974 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 75974 ']' 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:36.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:47:36.060 ERROR: process (pid: 75974) is no longer running 00:47:36.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75974) - No such process 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:36.060 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:36.061 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:36.061 10:42:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:47:36.061 10:42:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:47:36.061 10:42:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:47:36.061 10:42:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:47:36.061 00:47:36.061 real 0m1.519s 00:47:36.061 user 0m1.559s 00:47:36.061 sys 0m0.489s 00:47:36.061 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:36.061 ************************************ 00:47:36.061 END TEST default_locks 00:47:36.061 ************************************ 00:47:36.061 10:42:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:47:36.061 10:42:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:47:36.061 10:42:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:47:36.061 10:42:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:36.061 10:42:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:36.061 10:42:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:47:36.061 ************************************ 00:47:36.061 START TEST default_locks_via_rpc 00:47:36.061 ************************************ 00:47:36.061 10:42:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:47:36.061 10:42:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=76038 00:47:36.061 10:42:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 76038 00:47:36.061 10:42:43 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:47:36.061 10:42:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76038 ']' 00:47:36.061 10:42:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:36.061 10:42:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:36.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:36.061 10:42:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:36.061 10:42:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:36.061 10:42:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:36.061 [2024-07-22 10:42:43.826540] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:36.061 [2024-07-22 10:42:43.826600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76038 ] 00:47:36.061 [2024-07-22 10:42:43.943439] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:36.061 [2024-07-22 10:42:43.968833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:36.320 [2024-07-22 10:42:44.009302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 76038 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 76038 00:47:36.888 10:42:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 76038 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 76038 ']' 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 76038 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76038 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:37.458 killing process with pid 76038 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76038' 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 76038 00:47:37.458 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 76038 00:47:37.717 00:47:37.717 real 0m1.727s 00:47:37.717 user 0m1.779s 00:47:37.717 sys 0m0.536s 00:47:37.717 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:37.717 ************************************ 00:47:37.717 END TEST default_locks_via_rpc 00:47:37.717 ************************************ 00:47:37.717 10:42:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.717 10:42:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:47:37.717 10:42:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:47:37.717 10:42:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:37.717 10:42:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:37.717 10:42:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:47:37.717 ************************************ 00:47:37.717 START TEST non_locking_app_on_locked_coremask 00:47:37.717 ************************************ 00:47:37.717 10:42:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:47:37.717 10:42:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=76103 00:47:37.717 10:42:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:47:37.717 10:42:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 76103 /var/tmp/spdk.sock 00:47:37.717 10:42:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76103 ']' 00:47:37.717 10:42:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:37.717 10:42:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:47:37.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:37.717 10:42:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:37.717 10:42:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:37.717 10:42:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:37.717 [2024-07-22 10:42:45.626374] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:37.717 [2024-07-22 10:42:45.626450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76103 ] 00:47:37.976 [2024-07-22 10:42:45.743184] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:37.976 [2024-07-22 10:42:45.768159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:37.976 [2024-07-22 10:42:45.809536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=76131 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 76131 /var/tmp/spdk2.sock 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76131 ']' 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:47:38.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:38.542 10:42:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:38.800 [2024-07-22 10:42:46.509521] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
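The non_locking_app_on_locked_coremask case traced here starts a second target on the same core but with core locks disabled, so both instances can coexist. An outline of that setup, with socket paths and flags as they appear in the trace:
# Two targets on core 0: the first takes the core lock, the second opts out of locking.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock             # first instance takes the core 0 lock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock            # succeeds: this instance never tries to lock core 0
lslocks -p "$pid1" | grep -q spdk_cpu_lock           # only the first instance holds the lock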
00:47:38.800 [2024-07-22 10:42:46.509592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76131 ] 00:47:38.800 [2024-07-22 10:42:46.629800] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:38.800 [2024-07-22 10:42:46.646552] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:47:38.800 [2024-07-22 10:42:46.646580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:38.800 [2024-07-22 10:42:46.725410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:39.732 10:42:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:39.732 10:42:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:47:39.732 10:42:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 76103 00:47:39.732 10:42:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76103 00:47:39.732 10:42:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:47:40.297 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 76103 00:47:40.297 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 76103 ']' 00:47:40.297 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 76103 00:47:40.297 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:47:40.554 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:40.554 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76103 00:47:40.554 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:40.554 killing process with pid 76103 00:47:40.554 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:40.554 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76103' 00:47:40.554 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 76103 00:47:40.554 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 76103 00:47:41.119 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 76131 00:47:41.119 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 76131 ']' 00:47:41.119 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 76131 00:47:41.119 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:47:41.119 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:41.119 10:42:48 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76131 00:47:41.119 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:41.119 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:41.119 killing process with pid 76131 00:47:41.119 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76131' 00:47:41.119 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 76131 00:47:41.119 10:42:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 76131 00:47:41.377 00:47:41.377 real 0m3.618s 00:47:41.377 user 0m3.914s 00:47:41.377 sys 0m1.038s 00:47:41.377 10:42:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:41.377 10:42:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:41.377 ************************************ 00:47:41.377 END TEST non_locking_app_on_locked_coremask 00:47:41.377 ************************************ 00:47:41.377 10:42:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:47:41.377 10:42:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:47:41.377 10:42:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:41.377 10:42:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:41.377 10:42:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:47:41.377 ************************************ 00:47:41.377 START TEST locking_app_on_unlocked_coremask 00:47:41.377 ************************************ 00:47:41.377 10:42:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:47:41.377 10:42:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=76206 00:47:41.377 10:42:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 76206 /var/tmp/spdk.sock 00:47:41.377 10:42:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76206 ']' 00:47:41.377 10:42:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:41.377 10:42:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:41.377 10:42:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:41.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
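The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from a retry loop of roughly the shape below. The retry count of 100 is printed in the trace; the sleep interval and the specific RPC readiness probe are assumptions, not the verbatim implementation.
# Shape of the waitforlisten helper implied by the trace.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" || return 1                   # give up if the process died
        # one plausible readiness probe: any RPC answered on the socket
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}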
00:47:41.377 10:42:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:41.377 10:42:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:41.377 10:42:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:47:41.634 [2024-07-22 10:42:49.316561] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:41.634 [2024-07-22 10:42:49.316624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76206 ] 00:47:41.634 [2024-07-22 10:42:49.433387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:41.634 [2024-07-22 10:42:49.456937] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:47:41.634 [2024-07-22 10:42:49.456975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:41.634 [2024-07-22 10:42:49.497678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:42.567 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:42.567 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:47:42.567 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=76234 00:47:42.567 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 76234 /var/tmp/spdk2.sock 00:47:42.567 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:47:42.567 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76234 ']' 00:47:42.567 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:47:42.567 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:42.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:47:42.567 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:47:42.568 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:42.568 10:42:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:42.568 [2024-07-22 10:42:50.204389] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:42.568 [2024-07-22 10:42:50.204462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76234 ] 00:47:42.568 [2024-07-22 10:42:50.323805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:47:42.568 [2024-07-22 10:42:50.340805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:42.568 [2024-07-22 10:42:50.420155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:43.133 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:43.133 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:47:43.133 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 76234 00:47:43.133 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76234 00:47:43.133 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 76206 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 76206 ']' 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 76206 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76206 00:47:44.066 killing process with pid 76206 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76206' 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 76206 00:47:44.066 10:42:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 76206 00:47:44.632 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 76234 00:47:44.633 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 76234 ']' 00:47:44.633 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 76234 00:47:44.633 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:47:44.633 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:44.633 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76234 00:47:44.890 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:44.890 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:44.890 killing process with pid 76234 00:47:44.890 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76234' 00:47:44.890 
10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 76234 00:47:44.890 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 76234 00:47:45.148 00:47:45.148 real 0m3.604s 00:47:45.148 user 0m3.874s 00:47:45.148 sys 0m1.060s 00:47:45.148 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:45.148 ************************************ 00:47:45.148 10:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:45.148 END TEST locking_app_on_unlocked_coremask 00:47:45.148 ************************************ 00:47:45.148 10:42:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:47:45.148 10:42:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:47:45.148 10:42:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:45.149 10:42:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:45.149 10:42:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:47:45.149 ************************************ 00:47:45.149 START TEST locking_app_on_locked_coremask 00:47:45.149 ************************************ 00:47:45.149 10:42:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:47:45.149 10:42:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=76308 00:47:45.149 10:42:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:47:45.149 10:42:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 76308 /var/tmp/spdk.sock 00:47:45.149 10:42:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76308 ']' 00:47:45.149 10:42:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:45.149 10:42:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:45.149 10:42:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:45.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:45.149 10:42:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:45.149 10:42:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:45.149 [2024-07-22 10:42:52.996294] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:45.149 [2024-07-22 10:42:52.996353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76308 ] 00:47:45.406 [2024-07-22 10:42:53.112738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
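locking_app_on_locked_coremask, which starts at this point, flips the previous case around: the first target holds the core 0 lock, and the test asserts that a second ordinary target on the same core cannot come up. In outline (socket names per the trace):
# Expected-failure setup for the locked-coremask case.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!
# Core 0 is already locked by pid1, so the second app exits with
# "Cannot create lock on core 0" and waitforlisten must not succeed.
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock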
00:47:45.406 [2024-07-22 10:42:53.138027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:45.406 [2024-07-22 10:42:53.179097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=76336 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 76336 /var/tmp/spdk2.sock 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76336 /var/tmp/spdk2.sock 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76336 /var/tmp/spdk2.sock 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 76336 ']' 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:47:45.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:45.972 10:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:45.972 [2024-07-22 10:42:53.856610] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:45.973 [2024-07-22 10:42:53.856810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76336 ] 00:47:46.230 [2024-07-22 10:42:53.976805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:46.230 [2024-07-22 10:42:53.989383] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 76308 has claimed it. 
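The NOT wrapper used for that assertion (visible in the trace as valid_exec_arg plus the es bookkeeping) behaves roughly as below; the real helper also treats exit codes above 128 (death by signal) specially, which this sketch glosses over.
# Approximate behaviour of the NOT helper as it appears in the xtrace output.
NOT() {
    local es=0
    "$@" || es=$?      # run the wrapped command and capture its status
    ((es != 0))        # NOT succeeds only when the wrapped command failed
}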
00:47:46.230 [2024-07-22 10:42:53.989426] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:47:46.796 ERROR: process (pid: 76336) is no longer running 00:47:46.796 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (76336) - No such process 00:47:46.796 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:46.796 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:47:46.796 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:47:46.796 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:46.796 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:46.796 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:46.796 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 76308 00:47:46.796 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76308 00:47:46.796 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 76308 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 76308 ']' 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 76308 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76308 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76308' 00:47:47.054 killing process with pid 76308 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 76308 00:47:47.054 10:42:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 76308 00:47:47.311 ************************************ 00:47:47.311 END TEST locking_app_on_locked_coremask 00:47:47.311 ************************************ 00:47:47.311 00:47:47.311 real 0m2.277s 00:47:47.311 user 0m2.474s 00:47:47.311 sys 0m0.595s 00:47:47.311 10:42:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:47.311 10:42:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:47.569 10:42:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:47:47.569 10:42:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:47:47.569 10:42:55 event.cpu_locks -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:47.569 10:42:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:47.569 10:42:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:47:47.569 ************************************ 00:47:47.569 START TEST locking_overlapped_coremask 00:47:47.569 ************************************ 00:47:47.569 10:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:47:47.569 10:42:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=76386 00:47:47.569 10:42:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 76386 /var/tmp/spdk.sock 00:47:47.569 10:42:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:47:47.569 10:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 76386 ']' 00:47:47.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:47.569 10:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:47.569 10:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:47.569 10:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:47.570 10:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:47.570 10:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:47.570 [2024-07-22 10:42:55.346422] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:47.570 [2024-07-22 10:42:55.346496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76386 ] 00:47:47.570 [2024-07-22 10:42:55.464518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:47:47.570 [2024-07-22 10:42:55.488752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:47.827 [2024-07-22 10:42:55.530679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:47.827 [2024-07-22 10:42:55.530547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:47.827 [2024-07-22 10:42:55.530677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=76412 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 76412 /var/tmp/spdk2.sock 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76412 /var/tmp/spdk2.sock 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:47:48.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76412 /var/tmp/spdk2.sock 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 76412 ']' 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:48.392 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:48.392 [2024-07-22 10:42:56.215118] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:48.392 [2024-07-22 10:42:56.215609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76412 ] 00:47:48.649 [2024-07-22 10:42:56.335096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:47:48.649 [2024-07-22 10:42:56.350965] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76386 has claimed it. 00:47:48.649 [2024-07-22 10:42:56.351007] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:47:49.214 ERROR: process (pid: 76412) is no longer running 00:47:49.214 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (76412) - No such process 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 76386 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 76386 ']' 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 76386 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76386 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76386' 00:47:49.214 killing process with pid 76386 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 76386 00:47:49.214 10:42:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 76386 00:47:49.472 00:47:49.472 real 0m1.934s 00:47:49.472 user 0m5.294s 00:47:49.472 sys 0m0.403s 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:49.472 
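Note: the check_remaining_locks step above compares the /var/tmp/spdk_cpu_lock_* files against the expected set for the claimed cores. A rough manual equivalent, given as an illustrative sketch only and not part of this captured run (the pid placeholder is hypothetical), would be:
  # List the per-core lock files and confirm the target process holds flocks on them.
  ls /var/tmp/spdk_cpu_lock_*
  lslocks -p <spdk_tgt_pid> | grep spdk_cpu_lock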
************************************ 00:47:49.472 END TEST locking_overlapped_coremask 00:47:49.472 ************************************ 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:47:49.472 10:42:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:47:49.472 10:42:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:47:49.472 10:42:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:49.472 10:42:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:49.472 10:42:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:47:49.472 ************************************ 00:47:49.472 START TEST locking_overlapped_coremask_via_rpc 00:47:49.472 ************************************ 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76458 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 76458 /var/tmp/spdk.sock 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76458 ']' 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:49.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:49.472 10:42:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:49.472 [2024-07-22 10:42:57.351005] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:49.472 [2024-07-22 10:42:57.351076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76458 ] 00:47:49.729 [2024-07-22 10:42:57.471201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:49.729 [2024-07-22 10:42:57.480065] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:47:49.729 [2024-07-22 10:42:57.480093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:49.729 [2024-07-22 10:42:57.522578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:49.729 [2024-07-22 10:42:57.522733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:49.729 [2024-07-22 10:42:57.522733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76488 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 76488 /var/tmp/spdk2.sock 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76488 ']' 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:47:50.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:50.319 10:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:50.611 [2024-07-22 10:42:58.246262] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:50.611 [2024-07-22 10:42:58.246822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76488 ] 00:47:50.611 [2024-07-22 10:42:58.370808] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:50.611 [2024-07-22 10:42:58.385608] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:47:50.611 [2024-07-22 10:42:58.385632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:50.611 [2024-07-22 10:42:58.469051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:47:50.611 [2024-07-22 10:42:58.472335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:47:50.611 [2024-07-22 10:42:58.472339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:51.546 [2024-07-22 10:42:59.138361] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76458 has claimed it. 
00:47:51.546 2024/07/22 10:42:59 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:47:51.546 request: 00:47:51.546 { 00:47:51.546 "method": "framework_enable_cpumask_locks", 00:47:51.546 "params": {} 00:47:51.546 } 00:47:51.546 Got JSON-RPC error response 00:47:51.546 GoRPCClient: error on JSON-RPC call 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 76458 /var/tmp/spdk.sock 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76458 ']' 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:51.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 76488 /var/tmp/spdk2.sock 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76488 ']' 00:47:51.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
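Note: the Code=-32603 response above is the expected outcome of asking the second target to re-enable CPU core locks while process 76458 still holds core 2. A roughly equivalent manual call against the same socket, shown only as an illustrative sketch and not part of this captured run, would be:
  # Ask the target listening on /var/tmp/spdk2.sock to claim locks for its core mask;
  # with core 2 already locked by pid 76458 this call is expected to fail.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks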
00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:51.546 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:51.820 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:51.820 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:47:51.820 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:47:51.820 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:47:51.820 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:47:51.820 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:47:51.820 00:47:51.820 real 0m2.269s 00:47:51.820 user 0m0.951s 00:47:51.820 sys 0m0.251s 00:47:51.820 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:51.820 10:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:51.820 ************************************ 00:47:51.820 END TEST locking_overlapped_coremask_via_rpc 00:47:51.820 ************************************ 00:47:51.820 10:42:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:47:51.820 10:42:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:47:51.820 10:42:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76458 ]] 00:47:51.820 10:42:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76458 00:47:51.820 10:42:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76458 ']' 00:47:51.820 10:42:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76458 00:47:51.820 10:42:59 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:47:51.820 10:42:59 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:51.820 10:42:59 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76458 00:47:51.820 killing process with pid 76458 00:47:51.820 10:42:59 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:51.820 10:42:59 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:51.820 10:42:59 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76458' 00:47:51.820 10:42:59 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 76458 00:47:51.821 10:42:59 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 76458 00:47:52.078 10:42:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76488 ]] 00:47:52.079 10:42:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76488 00:47:52.079 10:42:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76488 ']' 00:47:52.079 10:42:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76488 00:47:52.079 10:42:59 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:47:52.079 10:42:59 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:52.079 10:42:59 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76488 00:47:52.079 killing process with pid 76488 00:47:52.079 10:42:59 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:47:52.079 10:42:59 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:47:52.079 10:42:59 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76488' 00:47:52.079 10:42:59 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 76488 00:47:52.079 10:42:59 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 76488 00:47:52.646 10:43:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:47:52.646 Process with pid 76458 is not found 00:47:52.646 10:43:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:47:52.646 10:43:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76458 ]] 00:47:52.646 10:43:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76458 00:47:52.646 10:43:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76458 ']' 00:47:52.646 10:43:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76458 00:47:52.646 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (76458) - No such process 00:47:52.646 10:43:00 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 76458 is not found' 00:47:52.646 10:43:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76488 ]] 00:47:52.646 10:43:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76488 00:47:52.646 10:43:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76488 ']' 00:47:52.646 10:43:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76488 00:47:52.646 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (76488) - No such process 00:47:52.646 Process with pid 76488 is not found 00:47:52.646 10:43:00 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 76488 is not found' 00:47:52.646 10:43:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:47:52.646 00:47:52.646 real 0m18.284s 00:47:52.646 user 0m30.822s 00:47:52.646 sys 0m5.253s 00:47:52.646 10:43:00 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:52.646 ************************************ 00:47:52.646 END TEST cpu_locks 00:47:52.646 ************************************ 00:47:52.646 10:43:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:47:52.646 10:43:00 event -- common/autotest_common.sh@1142 -- # return 0 00:47:52.646 ************************************ 00:47:52.646 END TEST event 00:47:52.646 ************************************ 00:47:52.646 00:47:52.646 real 0m44.703s 00:47:52.646 user 1m24.339s 00:47:52.646 sys 0m9.308s 00:47:52.646 10:43:00 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:52.646 10:43:00 event -- common/autotest_common.sh@10 -- # set +x 00:47:52.646 10:43:00 -- common/autotest_common.sh@1142 -- # return 0 00:47:52.646 10:43:00 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:47:52.646 10:43:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:52.646 10:43:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:52.646 10:43:00 -- common/autotest_common.sh@10 -- # set +x 00:47:52.646 ************************************ 00:47:52.646 START TEST thread 
00:47:52.646 ************************************ 00:47:52.646 10:43:00 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:47:52.646 * Looking for test storage... 00:47:52.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:47:52.646 10:43:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:47:52.646 10:43:00 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:47:52.646 10:43:00 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:52.646 10:43:00 thread -- common/autotest_common.sh@10 -- # set +x 00:47:52.905 ************************************ 00:47:52.905 START TEST thread_poller_perf 00:47:52.905 ************************************ 00:47:52.905 10:43:00 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:47:52.905 [2024-07-22 10:43:00.602889] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:52.905 [2024-07-22 10:43:00.602971] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76629 ] 00:47:52.905 [2024-07-22 10:43:00.723663] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:52.905 [2024-07-22 10:43:00.748125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:52.905 [2024-07-22 10:43:00.788795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:52.905 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:47:54.321 ====================================== 00:47:54.321 busy:2500212476 (cyc) 00:47:54.321 total_run_count: 431000 00:47:54.321 tsc_hz: 2490000000 (cyc) 00:47:54.321 ====================================== 00:47:54.321 poller_cost: 5800 (cyc), 2329 (nsec) 00:47:54.321 00:47:54.321 real 0m1.273s 00:47:54.321 user 0m1.113s 00:47:54.321 sys 0m0.053s 00:47:54.321 10:43:01 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:54.321 10:43:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:47:54.321 ************************************ 00:47:54.321 END TEST thread_poller_perf 00:47:54.321 ************************************ 00:47:54.321 10:43:01 thread -- common/autotest_common.sh@1142 -- # return 0 00:47:54.321 10:43:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:47:54.321 10:43:01 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:47:54.321 10:43:01 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:54.321 10:43:01 thread -- common/autotest_common.sh@10 -- # set +x 00:47:54.321 ************************************ 00:47:54.321 START TEST thread_poller_perf 00:47:54.321 ************************************ 00:47:54.321 10:43:01 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:47:54.321 [2024-07-22 10:43:01.946856] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
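Note: the poller_cost figures in the first poller_perf result above are consistent with dividing the busy cycle count by total_run_count and converting with tsc_hz. A quick check of that arithmetic, illustrative only and using the numbers copied from the output above:
  # 2500212476 busy cycles over 431000 runs at 2490000000 Hz
  awk 'BEGIN { busy=2500212476; runs=431000; hz=2490000000;
               cyc=busy/runs; printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc*1e9/hz }'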
00:47:54.321 [2024-07-22 10:43:01.946967] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76670 ] 00:47:54.321 [2024-07-22 10:43:02.066163] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:54.321 [2024-07-22 10:43:02.090096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:54.321 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:47:54.321 [2024-07-22 10:43:02.128225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:55.262 ====================================== 00:47:55.262 busy:2491963068 (cyc) 00:47:55.262 total_run_count: 5698000 00:47:55.262 tsc_hz: 2490000000 (cyc) 00:47:55.262 ====================================== 00:47:55.262 poller_cost: 437 (cyc), 175 (nsec) 00:47:55.262 00:47:55.262 real 0m1.265s 00:47:55.262 user 0m1.106s 00:47:55.262 sys 0m0.055s 00:47:55.262 10:43:03 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:55.262 10:43:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:47:55.262 ************************************ 00:47:55.262 END TEST thread_poller_perf 00:47:55.262 ************************************ 00:47:55.520 10:43:03 thread -- common/autotest_common.sh@1142 -- # return 0 00:47:55.520 10:43:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:47:55.520 00:47:55.520 real 0m2.801s 00:47:55.520 user 0m2.304s 00:47:55.520 sys 0m0.287s 00:47:55.520 10:43:03 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:55.520 10:43:03 thread -- common/autotest_common.sh@10 -- # set +x 00:47:55.520 ************************************ 00:47:55.520 END TEST thread 00:47:55.520 ************************************ 00:47:55.520 10:43:03 -- common/autotest_common.sh@1142 -- # return 0 00:47:55.520 10:43:03 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:47:55.520 10:43:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:55.520 10:43:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:55.520 10:43:03 -- common/autotest_common.sh@10 -- # set +x 00:47:55.520 ************************************ 00:47:55.520 START TEST accel 00:47:55.520 ************************************ 00:47:55.520 10:43:03 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:47:55.520 * Looking for test storage... 00:47:55.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:47:55.520 10:43:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:47:55.520 10:43:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:47:55.520 10:43:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:47:55.520 10:43:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76739 00:47:55.520 10:43:03 accel -- accel/accel.sh@63 -- # waitforlisten 76739 00:47:55.778 10:43:03 accel -- common/autotest_common.sh@829 -- # '[' -z 76739 ']' 00:47:55.778 10:43:03 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:55.778 10:43:03 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:55.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:47:55.778 10:43:03 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:47:55.778 10:43:03 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:55.778 10:43:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:47:55.778 10:43:03 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:55.778 10:43:03 accel -- common/autotest_common.sh@10 -- # set +x 00:47:55.778 10:43:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:47:55.778 10:43:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:47:55.778 10:43:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:47:55.778 10:43:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:47:55.778 10:43:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:47:55.778 10:43:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:47:55.778 10:43:03 accel -- accel/accel.sh@41 -- # jq -r . 00:47:55.778 [2024-07-22 10:43:03.507630] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:55.778 [2024-07-22 10:43:03.507716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76739 ] 00:47:55.778 [2024-07-22 10:43:03.624623] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:55.778 [2024-07-22 10:43:03.649151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:55.778 [2024-07-22 10:43:03.688784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@862 -- # return 0 00:47:56.713 10:43:04 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:47:56.713 10:43:04 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:47:56.713 10:43:04 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:47:56.713 10:43:04 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:47:56.713 10:43:04 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:47:56.713 10:43:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:56.713 10:43:04 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@10 -- # set +x 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 
10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # IFS== 00:47:56.713 10:43:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:47:56.713 10:43:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:47:56.713 10:43:04 accel -- accel/accel.sh@75 -- # killprocess 76739 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@948 -- # '[' -z 76739 ']' 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@952 -- # kill -0 76739 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@953 -- # uname 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76739 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:56.713 killing process with pid 76739 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76739' 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@967 -- # kill 76739 00:47:56.713 10:43:04 accel -- common/autotest_common.sh@972 -- # wait 76739 00:47:56.971 10:43:04 accel -- accel/accel.sh@76 -- # trap - ERR 00:47:56.971 10:43:04 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:47:56.971 10:43:04 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:47:56.972 10:43:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:56.972 10:43:04 accel -- common/autotest_common.sh@10 -- # set +x 00:47:56.972 10:43:04 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:47:56.972 10:43:04 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:47:56.972 10:43:04 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:47:56.972 10:43:04 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:47:56.972 10:43:04 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:47:56.972 10:43:04 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:47:56.972 10:43:04 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:47:56.972 10:43:04 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:47:56.972 10:43:04 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:47:56.972 10:43:04 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
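Note: the expected_opcs table built earlier in this accel run comes from the accel_get_opc_assignments RPC, with every opcode mapped to the software module here. A standalone query, given as an illustrative sketch only and not part of this captured run, would be:
  # Print opcode=module pairs using the same jq filter as the test.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'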
00:47:56.972 10:43:04 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:56.972 10:43:04 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:47:56.972 10:43:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:47:56.972 10:43:04 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:47:56.972 10:43:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:47:56.972 10:43:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:56.972 10:43:04 accel -- common/autotest_common.sh@10 -- # set +x 00:47:56.972 ************************************ 00:47:56.972 START TEST accel_missing_filename 00:47:56.972 ************************************ 00:47:56.972 10:43:04 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:47:56.972 10:43:04 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:47:56.972 10:43:04 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:47:56.972 10:43:04 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:47:56.972 10:43:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:56.972 10:43:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:47:56.972 10:43:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:56.972 10:43:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:47:56.972 10:43:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:47:56.972 10:43:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:47:56.972 10:43:04 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:47:56.972 10:43:04 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:47:56.972 10:43:04 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:47:56.972 10:43:04 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:47:56.972 10:43:04 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:47:56.972 10:43:04 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:47:56.972 10:43:04 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:47:56.972 [2024-07-22 10:43:04.874878] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:56.972 [2024-07-22 10:43:04.874961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76807 ] 00:47:57.230 [2024-07-22 10:43:04.993449] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:47:57.230 [2024-07-22 10:43:05.015479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:57.230 [2024-07-22 10:43:05.054264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:57.230 [2024-07-22 10:43:05.095196] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:57.230 [2024-07-22 10:43:05.153897] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:47:57.493 A filename is required. 00:47:57.493 10:43:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:47:57.493 10:43:05 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:57.493 10:43:05 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:47:57.493 10:43:05 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:47:57.493 10:43:05 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:47:57.493 10:43:05 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:57.493 00:47:57.493 real 0m0.377s 00:47:57.493 user 0m0.220s 00:47:57.493 sys 0m0.094s 00:47:57.493 10:43:05 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:57.493 10:43:05 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:47:57.493 ************************************ 00:47:57.493 END TEST accel_missing_filename 00:47:57.493 ************************************ 00:47:57.493 10:43:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:47:57.493 10:43:05 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:47:57.493 10:43:05 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:47:57.493 10:43:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:57.493 10:43:05 accel -- common/autotest_common.sh@10 -- # set +x 00:47:57.493 ************************************ 00:47:57.493 START TEST accel_compress_verify 00:47:57.493 ************************************ 00:47:57.493 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:47:57.493 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:47:57.493 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:47:57.493 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:47:57.493 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:57.493 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:47:57.493 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:57.493 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:47:57.493 10:43:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:47:57.493 10:43:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:47:57.493 10:43:05 accel.accel_compress_verify -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:47:57.493 10:43:05 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:47:57.493 10:43:05 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:47:57.493 10:43:05 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:47:57.493 10:43:05 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:47:57.493 10:43:05 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:47:57.493 10:43:05 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:47:57.493 [2024-07-22 10:43:05.320541] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:57.493 [2024-07-22 10:43:05.320635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76833 ] 00:47:57.751 [2024-07-22 10:43:05.438919] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:57.751 [2024-07-22 10:43:05.461441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:57.751 [2024-07-22 10:43:05.500303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:57.751 [2024-07-22 10:43:05.541232] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:57.751 [2024-07-22 10:43:05.599780] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:47:57.751 00:47:57.751 Compression does not support the verify option, aborting. 00:47:57.751 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:47:57.751 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:57.751 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:47:57.751 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:47:57.751 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:47:57.751 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:57.751 00:47:57.751 real 0m0.376s 00:47:57.751 user 0m0.220s 00:47:57.751 sys 0m0.093s 00:47:57.751 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:57.751 ************************************ 00:47:57.751 10:43:05 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:47:57.751 END TEST accel_compress_verify 00:47:57.751 ************************************ 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:47:58.010 10:43:05 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@10 -- # set +x 00:47:58.010 ************************************ 00:47:58.010 START TEST accel_wrong_workload 00:47:58.010 ************************************ 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:47:58.010 10:43:05 
accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:47:58.010 10:43:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:47:58.010 10:43:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:47:58.010 10:43:05 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:47:58.010 10:43:05 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:47:58.010 10:43:05 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:47:58.010 10:43:05 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:47:58.010 10:43:05 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:47:58.010 10:43:05 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:47:58.010 10:43:05 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:47:58.010 Unsupported workload type: foobar 00:47:58.010 [2024-07-22 10:43:05.766450] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:47:58.010 accel_perf options: 00:47:58.010 [-h help message] 00:47:58.010 [-q queue depth per core] 00:47:58.010 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:47:58.010 [-T number of threads per core 00:47:58.010 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:47:58.010 [-t time in seconds] 00:47:58.010 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:47:58.010 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:47:58.010 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:47:58.010 [-l for compress/decompress workloads, name of uncompressed input file 00:47:58.010 [-S for crc32c workload, use this seed value (default 0) 00:47:58.010 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:47:58.010 [-f for fill workload, use this BYTE value (default 255) 00:47:58.010 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:47:58.010 [-y verify result if this switch is on] 00:47:58.010 [-a tasks to allocate per core (default: same value as -q)] 00:47:58.010 Can be used to spread operations across a wider range of memory. 
00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:58.010 00:47:58.010 real 0m0.038s 00:47:58.010 user 0m0.020s 00:47:58.010 sys 0m0.018s 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:58.010 10:43:05 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:47:58.010 ************************************ 00:47:58.010 END TEST accel_wrong_workload 00:47:58.010 ************************************ 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:47:58.010 10:43:05 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@10 -- # set +x 00:47:58.010 ************************************ 00:47:58.010 START TEST accel_negative_buffers 00:47:58.010 ************************************ 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:47:58.010 10:43:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:47:58.010 10:43:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:47:58.010 10:43:05 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:47:58.010 10:43:05 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:47:58.010 10:43:05 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:47:58.010 10:43:05 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:47:58.010 10:43:05 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:47:58.010 10:43:05 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:47:58.010 10:43:05 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:47:58.010 -x option must be non-negative. 
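The "-x option must be non-negative." message just above is the same parse-time validation applied to buffer counts: the usage text gives -x a documented minimum of two source buffers for the xor workload, so -1 never reaches the workload code and the test only asserts the non-zero exit. A sketch of a failing and a passing call, using only flags from the printed usage (running without the harness-supplied -c config is an assumption):

  # Refused at argument parsing: the source-buffer count may not be negative
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1
  # Runs: xor across the documented minimum of two source buffers
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2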
00:47:58.010 [2024-07-22 10:43:05.874757] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:47:58.010 accel_perf options: 00:47:58.010 [-h help message] 00:47:58.010 [-q queue depth per core] 00:47:58.010 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:47:58.010 [-T number of threads per core 00:47:58.010 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:47:58.010 [-t time in seconds] 00:47:58.010 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:47:58.010 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:47:58.010 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:47:58.010 [-l for compress/decompress workloads, name of uncompressed input file 00:47:58.010 [-S for crc32c workload, use this seed value (default 0) 00:47:58.010 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:47:58.010 [-f for fill workload, use this BYTE value (default 255) 00:47:58.010 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:47:58.010 [-y verify result if this switch is on] 00:47:58.010 [-a tasks to allocate per core (default: same value as -q)] 00:47:58.010 Can be used to spread operations across a wider range of memory. 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:58.010 00:47:58.010 real 0m0.041s 00:47:58.010 user 0m0.023s 00:47:58.010 sys 0m0.017s 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:58.010 10:43:05 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:47:58.010 ************************************ 00:47:58.010 END TEST accel_negative_buffers 00:47:58.010 ************************************ 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:47:58.010 10:43:05 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:58.010 10:43:05 accel -- common/autotest_common.sh@10 -- # set +x 00:47:58.269 ************************************ 00:47:58.269 START TEST accel_crc32c 00:47:58.269 ************************************ 00:47:58.269 10:43:05 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:47:58.269 10:43:05 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:47:58.269 [2024-07-22 10:43:05.979252] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:58.269 [2024-07-22 10:43:05.979335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76899 ] 00:47:58.269 [2024-07-22 10:43:06.097704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:58.269 [2024-07-22 10:43:06.120227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:58.269 [2024-07-22 10:43:06.159633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:47:58.528 10:43:06 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:58.528 10:43:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:59.464 10:43:07 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:47:59.464 10:43:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:47:59.464 00:47:59.464 real 0m1.376s 00:47:59.464 user 0m1.183s 00:47:59.464 sys 0m0.107s 00:47:59.464 10:43:07 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:59.465 ************************************ 00:47:59.465 10:43:07 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:47:59.465 END TEST accel_crc32c 00:47:59.465 ************************************ 00:47:59.465 10:43:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:47:59.465 10:43:07 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:47:59.465 10:43:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:47:59.465 10:43:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:59.465 10:43:07 accel -- common/autotest_common.sh@10 -- # set +x 00:47:59.723 ************************************ 00:47:59.723 START TEST accel_crc32c_C2 00:47:59.723 ************************************ 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
crc32c -y -C 2 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:47:59.723 [2024-07-22 10:43:07.429471] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:47:59.723 [2024-07-22 10:43:07.429554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76928 ] 00:47:59.723 [2024-07-22 10:43:07.547403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:59.723 [2024-07-22 10:43:07.569876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:59.723 [2024-07-22 10:43:07.608823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.723 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 
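Between this crc32c run and the previous one, the only change is the flag set: the earlier test seeded the checksum with -S 32, while this -C 2 variant keeps the default seed of 0 and instead sets the io vector size to two buffers per task, per the -C description in the usage text. Hand-run equivalents of the pair (accel_perf defaults standing in for the harness JSON config is an assumption):

  # crc32c over the default 4 KiB buffers with a caller-supplied seed, verified with -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  # crc32c with the default seed of 0 but an io vector size of 2 per task
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2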
00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:47:59.981 10:43:07 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:47:59.981 10:43:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:00.913 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:00.914 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:00.914 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:00.914 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:48:00.914 10:43:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:00.914 00:48:00.914 real 0m1.375s 00:48:00.914 user 0m1.181s 00:48:00.914 sys 0m0.108s 00:48:00.914 10:43:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:00.914 10:43:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:48:00.914 ************************************ 00:48:00.914 END TEST accel_crc32c_C2 00:48:00.914 ************************************ 00:48:00.914 10:43:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:00.914 10:43:08 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:48:00.914 10:43:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:48:00.914 10:43:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:00.914 10:43:08 accel -- common/autotest_common.sh@10 -- # set +x 00:48:01.172 ************************************ 00:48:01.172 START TEST accel_copy 00:48:01.172 ************************************ 00:48:01.172 10:43:08 accel.accel_copy -- common/autotest_common.sh@1123 -- 
# accel_test -t 1 -w copy -y 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:48:01.172 10:43:08 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:48:01.172 [2024-07-22 10:43:08.876620] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:01.172 [2024-07-22 10:43:08.876701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76958 ] 00:48:01.172 [2024-07-22 10:43:08.994140] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:01.172 [2024-07-22 10:43:09.007316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:01.172 [2024-07-22 10:43:09.046192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:48:01.172 10:43:09 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:01.172 10:43:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:48:02.567 10:43:10 
accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:48:02.567 10:43:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:02.567 00:48:02.567 real 0m1.366s 00:48:02.567 user 0m1.183s 00:48:02.567 sys 0m0.098s 00:48:02.567 10:43:10 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:02.567 10:43:10 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:48:02.567 ************************************ 00:48:02.567 END TEST accel_copy 00:48:02.567 ************************************ 00:48:02.567 10:43:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:02.567 10:43:10 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:48:02.567 10:43:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:48:02.567 10:43:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:02.567 10:43:10 accel -- common/autotest_common.sh@10 -- # set +x 00:48:02.567 ************************************ 00:48:02.567 START TEST accel_fill 00:48:02.567 ************************************ 00:48:02.567 10:43:10 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:48:02.567 10:43:10 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:48:02.567 [2024-07-22 10:43:10.311110] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:02.567 [2024-07-22 10:43:10.311192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76999 ] 00:48:02.567 [2024-07-22 10:43:10.429546] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
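The fill test starting above is the first in this block to override the queueing knobs rather than rely on defaults: -f 128 sets the fill byte (it appears as 0x80 in the value trace that follows), -q 64 sets the per-core queue depth, and -a 64 pre-allocates 64 tasks per core, which per the usage text would otherwise default to the -q value. A hand-run equivalent (again assuming accel_perf defaults in place of the harness-supplied JSON config):

  # fill 4 KiB buffers with byte 0x80 at queue depth 64, with 64 tasks allocated per core
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y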
00:48:02.567 [2024-07-22 10:43:10.452749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:02.567 [2024-07-22 10:43:10.491379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:48:02.826 10:43:10 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:02.826 10:43:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:03.762 10:43:11 
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:48:03.762 10:43:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:03.762 00:48:03.762 real 0m1.374s 00:48:03.762 user 0m1.189s 00:48:03.762 sys 0m0.098s 00:48:03.762 10:43:11 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:03.762 10:43:11 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:48:03.762 ************************************ 00:48:03.762 END TEST accel_fill 00:48:03.762 ************************************ 00:48:04.021 10:43:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:04.021 10:43:11 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:48:04.021 10:43:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:48:04.021 10:43:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:04.021 10:43:11 accel -- common/autotest_common.sh@10 -- # set +x 00:48:04.021 ************************************ 00:48:04.021 START TEST accel_copy_crc32c 00:48:04.021 ************************************ 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:48:04.021 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:48:04.021 [2024-07-22 10:43:11.760179] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:04.021 [2024-07-22 10:43:11.760287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77028 ] 00:48:04.021 [2024-07-22 10:43:11.878163] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:48:04.021 [2024-07-22 10:43:11.900634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:04.021 [2024-07-22 10:43:11.938255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:04.280 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:04.281 10:43:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:05.217 
10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:48:05.217 ************************************ 00:48:05.217 END TEST accel_copy_crc32c 00:48:05.217 ************************************ 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:05.217 00:48:05.217 real 0m1.373s 00:48:05.217 user 0m1.192s 00:48:05.217 sys 0m0.094s 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:05.217 10:43:13 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:48:05.476 10:43:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:05.476 10:43:13 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:48:05.476 10:43:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:48:05.476 10:43:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:05.476 10:43:13 accel -- common/autotest_common.sh@10 -- # set +x 00:48:05.476 ************************************ 00:48:05.476 START TEST accel_copy_crc32c_C2 00:48:05.476 ************************************ 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:48:05.476 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:48:05.476 [2024-07-22 10:43:13.207876] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:05.476 [2024-07-22 10:43:13.207952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77063 ] 00:48:05.476 [2024-07-22 10:43:13.325194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:48:05.476 [2024-07-22 10:43:13.348319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:05.476 [2024-07-22 10:43:13.386960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.734 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.734 
10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:05.735 10:43:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:06.668 ************************************ 00:48:06.668 END TEST accel_copy_crc32c_C2 00:48:06.668 ************************************ 00:48:06.668 00:48:06.668 real 0m1.376s 00:48:06.668 user 0m1.182s 00:48:06.668 sys 0m0.106s 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:06.668 10:43:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:48:06.927 10:43:14 accel -- common/autotest_common.sh@1142 -- # return 0 
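The run_test blocks above and below all drive the same accel_perf example binary through accel.sh, differing only in the -w workload (copy_crc32c, dualcast, compare, xor, dif_verify) and the extra per-workload flags visible in the trace (-C 2, -x 3). As a rough standalone sketch, assuming the spdk_repo checkout used by this job and leaving out the /dev/fd/62 JSON config that build_accel_config normally feeds in:
  # sketch only: flags copied from the accel.sh trace above; the optional -c <json config> is omitted
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y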
00:48:06.927 10:43:14 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:48:06.927 10:43:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:48:06.927 10:43:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:06.927 10:43:14 accel -- common/autotest_common.sh@10 -- # set +x 00:48:06.927 ************************************ 00:48:06.927 START TEST accel_dualcast 00:48:06.927 ************************************ 00:48:06.927 10:43:14 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:48:06.927 10:43:14 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:48:06.927 [2024-07-22 10:43:14.657413] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:06.927 [2024-07-22 10:43:14.657496] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77097 ] 00:48:06.927 [2024-07-22 10:43:14.775397] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:06.927 [2024-07-22 10:43:14.798907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:06.927 [2024-07-22 10:43:14.836995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.215 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:07.216 10:43:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:08.160 10:43:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:08.160 10:43:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:08.160 10:43:16 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:48:08.160 10:43:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:48:08.160 10:43:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:48:08.160 10:43:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:48:08.160 10:43:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:48:08.160 10:43:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:08.160 10:43:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:48:08.160 ************************************ 00:48:08.160 END TEST accel_dualcast 00:48:08.160 ************************************ 00:48:08.160 10:43:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:08.160 00:48:08.160 real 0m1.378s 00:48:08.160 user 0m1.193s 00:48:08.160 sys 0m0.096s 00:48:08.160 10:43:16 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:08.160 10:43:16 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:48:08.160 10:43:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:08.160 10:43:16 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:48:08.160 10:43:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:48:08.160 10:43:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:08.160 10:43:16 accel -- common/autotest_common.sh@10 -- # set +x 00:48:08.160 ************************************ 00:48:08.160 START TEST accel_compare 00:48:08.160 ************************************ 00:48:08.160 10:43:16 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:48:08.160 10:43:16 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:48:08.418 [2024-07-22 10:43:16.109913] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:08.418 [2024-07-22 10:43:16.110012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77126 ] 00:48:08.418 [2024-07-22 10:43:16.228728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:48:08.418 [2024-07-22 10:43:16.253686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:08.418 [2024-07-22 10:43:16.291722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.418 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:08.676 10:43:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:09.609 10:43:17 accel.accel_compare -- 
accel/accel.sh@20 -- # val= 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:48:09.609 10:43:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:09.609 00:48:09.609 real 0m1.379s 00:48:09.609 user 0m1.186s 00:48:09.610 sys 0m0.105s 00:48:09.610 ************************************ 00:48:09.610 END TEST accel_compare 00:48:09.610 10:43:17 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:09.610 10:43:17 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:48:09.610 ************************************ 00:48:09.610 10:43:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:09.610 10:43:17 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:48:09.610 10:43:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:48:09.610 10:43:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:09.610 10:43:17 accel -- common/autotest_common.sh@10 -- # set +x 00:48:09.610 ************************************ 00:48:09.610 START TEST accel_xor 00:48:09.610 ************************************ 00:48:09.610 10:43:17 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:48:09.610 10:43:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:48:09.868 [2024-07-22 10:43:17.558950] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:09.868 [2024-07-22 10:43:17.559036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77161 ] 00:48:09.868 [2024-07-22 10:43:17.677613] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:09.868 [2024-07-22 10:43:17.702897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:09.868 [2024-07-22 10:43:17.741229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.868 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:09.869 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:10.127 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:10.127 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:10.127 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:10.127 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:10.127 10:43:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:10.127 10:43:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:10.127 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:10.127 10:43:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:11.062 00:48:11.062 real 0m1.380s 00:48:11.062 user 0m1.191s 00:48:11.062 sys 0m0.101s 00:48:11.062 ************************************ 00:48:11.062 END TEST accel_xor 00:48:11.062 ************************************ 00:48:11.062 10:43:18 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:11.062 10:43:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:48:11.062 10:43:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:11.062 10:43:18 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:48:11.062 10:43:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:48:11.062 10:43:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:11.062 10:43:18 accel -- common/autotest_common.sh@10 -- # set +x 00:48:11.062 ************************************ 00:48:11.062 START TEST accel_xor 00:48:11.062 ************************************ 00:48:11.062 10:43:18 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:48:11.062 10:43:18 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:48:11.321 [2024-07-22 10:43:19.012552] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:11.321 [2024-07-22 10:43:19.012635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77195 ] 00:48:11.321 [2024-07-22 10:43:19.131376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:11.321 [2024-07-22 10:43:19.155726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:11.321 [2024-07-22 10:43:19.203169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.579 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.580 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.580 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.580 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.580 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.580 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:11.580 10:43:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:11.580 10:43:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:11.580 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:11.580 10:43:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:48:12.516 10:43:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:12.516 00:48:12.516 real 0m1.396s 00:48:12.516 user 0m1.197s 00:48:12.516 sys 0m0.110s 00:48:12.516 10:43:20 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:12.516 10:43:20 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:48:12.516 ************************************ 00:48:12.516 END TEST accel_xor 00:48:12.516 ************************************ 00:48:12.516 10:43:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:12.516 10:43:20 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:48:12.516 10:43:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:48:12.516 10:43:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:12.516 10:43:20 accel -- common/autotest_common.sh@10 -- # set +x 00:48:12.774 ************************************ 00:48:12.774 START TEST accel_dif_verify 00:48:12.774 ************************************ 00:48:12.774 10:43:20 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:48:12.774 [2024-07-22 10:43:20.483316] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:12.774 [2024-07-22 10:43:20.483519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77230 ] 00:48:12.774 [2024-07-22 10:43:20.601087] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:12.774 [2024-07-22 10:43:20.623785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:12.774 [2024-07-22 10:43:20.661404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:12.774 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.033 10:43:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.970 10:43:21 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:48:13.970 10:43:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:13.970 00:48:13.970 real 0m1.376s 00:48:13.970 user 0m1.186s 00:48:13.970 sys 0m0.103s 00:48:13.970 10:43:21 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:13.970 10:43:21 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:48:13.970 ************************************ 00:48:13.970 END TEST accel_dif_verify 00:48:13.970 ************************************ 00:48:13.970 10:43:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:13.970 10:43:21 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:48:13.970 10:43:21 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:48:13.970 10:43:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:13.970 10:43:21 accel -- common/autotest_common.sh@10 -- # set +x 00:48:13.970 ************************************ 00:48:13.970 START TEST accel_dif_generate 00:48:13.970 ************************************ 00:48:13.970 10:43:21 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:48:13.970 10:43:21 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:48:13.970 10:43:21 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:48:13.970 10:43:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:13.970 10:43:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:13.970 10:43:21 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:48:14.229 10:43:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 
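The blocks of case "$var" in / IFS=: / read -r var val lines that dominate each case are the xtrace of accel.sh's settings loop: the test feeds colon-separated name/value pairs (opcode, buffer sizes, module, queue depth, run time) and the script reads them back one pair per iteration. A stripped-down illustration of that pattern, not the real accel.sh code and with illustrative setting names, looks like this:

  # simplified sketch of the var:val loop seen in the trace (illustrative only)
  while IFS=: read -r var val; do
    case "$var" in
      accel_opc)    accel_opc=$val ;;      # e.g. dif_generate
      accel_module) accel_module=$val ;;   # e.g. software
      *)            : ;;                   # block size, queue depth, run time, ...
    esac
  done < <(printf '%s\n' accel_opc:dif_generate accel_module:software queue_depth:32)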
00:48:14.229 10:43:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:48:14.229 10:43:21 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:14.229 10:43:21 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:14.229 10:43:21 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:14.229 10:43:21 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:14.229 10:43:21 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:14.229 10:43:21 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:48:14.229 10:43:21 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:48:14.229 [2024-07-22 10:43:21.926853] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:14.229 [2024-07-22 10:43:21.926919] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77264 ] 00:48:14.229 [2024-07-22 10:43:22.044871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:48:14.229 [2024-07-22 10:43:22.068655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:14.229 [2024-07-22 10:43:22.106735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
IFS=: 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.229 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.488 10:43:22 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:14.488 10:43:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:48:15.424 10:43:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:15.424 00:48:15.424 real 0m1.377s 00:48:15.424 user 0m1.185s 00:48:15.424 sys 0m0.104s 00:48:15.424 ************************************ 00:48:15.424 END TEST accel_dif_generate 
00:48:15.424 ************************************ 00:48:15.424 10:43:23 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:15.424 10:43:23 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:48:15.424 10:43:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:15.424 10:43:23 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:48:15.424 10:43:23 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:48:15.424 10:43:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:15.424 10:43:23 accel -- common/autotest_common.sh@10 -- # set +x 00:48:15.424 ************************************ 00:48:15.424 START TEST accel_dif_generate_copy 00:48:15.424 ************************************ 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:48:15.424 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:48:15.682 [2024-07-22 10:43:23.376589] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:15.682 [2024-07-22 10:43:23.376795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77293 ] 00:48:15.682 [2024-07-22 10:43:23.494536] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
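Each case boots its own short-lived SPDK application, and the bracketed "DPDK EAL parameters" line above records how: a single core (-c 0x1), physical-address IOVA mode, huge-page unlink, telemetry disabled, and a per-run --file-prefix derived from the test's pid (spdk_pid77293 here). When comparing several runs from a saved log, a small hypothetical helper like the following (log file name assumed) pulls those parameter lines out for a side-by-side look:

  # hypothetical post-processing, not part of the test suite
  grep -o '\[ DPDK EAL parameters: [^]]*\]' nvmf-tcp-vg-autotest.log | sort | uniq -c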
00:48:15.682 [2024-07-22 10:43:23.516643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:15.682 [2024-07-22 10:43:23.554281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.682 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:15.683 10:43:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:17.057 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:48:17.058 ************************************ 00:48:17.058 END TEST accel_dif_generate_copy 00:48:17.058 ************************************ 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:17.058 00:48:17.058 real 0m1.374s 00:48:17.058 user 0m1.188s 00:48:17.058 sys 0m0.098s 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:17.058 10:43:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:48:17.058 10:43:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:17.058 10:43:24 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:48:17.058 10:43:24 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:17.058 10:43:24 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:48:17.058 10:43:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:17.058 10:43:24 accel -- common/autotest_common.sh@10 -- # set +x 00:48:17.058 ************************************ 00:48:17.058 START TEST accel_comp 00:48:17.058 ************************************ 00:48:17.058 10:43:24 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:48:17.058 10:43:24 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:48:17.058 [2024-07-22 10:43:24.824807] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:17.058 [2024-07-22 10:43:24.825020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77328 ] 00:48:17.058 [2024-07-22 10:43:24.942980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:48:17.058 [2024-07-22 10:43:24.966874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:17.317 [2024-07-22 10:43:25.006240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 
accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:17.317 10:43:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:18.254 ************************************ 00:48:18.254 END TEST accel_comp 00:48:18.254 ************************************ 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:48:18.254 10:43:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:18.254 00:48:18.254 real 0m1.383s 00:48:18.254 user 0m1.188s 00:48:18.254 sys 0m0.107s 00:48:18.254 10:43:26 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:18.254 10:43:26 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:48:18.542 10:43:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:18.542 10:43:26 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:48:18.542 10:43:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:48:18.542 10:43:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:18.542 10:43:26 accel -- common/autotest_common.sh@10 -- # set +x 00:48:18.542 ************************************ 00:48:18.542 START TEST accel_decomp 00:48:18.542 ************************************ 00:48:18.542 10:43:26 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 
-y 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:48:18.542 10:43:26 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:48:18.542 [2024-07-22 10:43:26.280635] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:18.542 [2024-07-22 10:43:26.280717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77362 ] 00:48:18.542 [2024-07-22 10:43:26.399159] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
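The compress/decompress cases differ from the DIF ones mainly in their inputs: the trace above points accel_perf at the checked-in test file /home/vagrant/spdk_repo/spdk/test/accel/bib via -l, and the decompress variants add the -y switch seen in the run_test line. Reproducing the decompress invocation by hand, with fd 62 supplied the same way as in the earlier dif_verify sketch, would look roughly like:

  # sketch only: flags and path copied from the xtrace; fd 62 set up as before
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y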
00:48:18.542 [2024-07-22 10:43:26.422559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:18.856 [2024-07-22 10:43:26.462013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 
accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:18.856 10:43:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:19.794 10:43:27 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:48:19.794 10:43:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:19.794 00:48:19.794 real 0m1.382s 00:48:19.794 user 0m1.203s 00:48:19.794 sys 0m0.094s 00:48:19.794 10:43:27 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:19.794 10:43:27 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:48:19.795 ************************************ 00:48:19.795 END TEST accel_decomp 00:48:19.795 ************************************ 00:48:19.795 10:43:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:19.795 10:43:27 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:48:19.795 10:43:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:48:19.795 10:43:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:19.795 10:43:27 accel -- common/autotest_common.sh@10 -- # set +x 00:48:19.795 ************************************ 00:48:19.795 START TEST accel_decomp_full 00:48:19.795 ************************************ 00:48:19.795 10:43:27 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 
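Every case, including accel_decomp just above, closes with the same three assertions: a module was selected, an opcode was recorded, and the module is the software engine. The odd-looking \s\o\f\t\w\a\r\e in those lines appears to be nothing more than bash xtrace escaping the quoted right-hand side of == inside [[ ]]; written plainly, and using the accel_module/accel_opc variables the trace shows being set, the check reads roughly as:

  # the closing assertions as they read before xtrace escaping (illustrative)
  [[ -n $accel_module ]]
  [[ -n $accel_opc ]]
  [[ $accel_module == "software" ]]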
00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:48:19.795 10:43:27 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:48:20.053 [2024-07-22 10:43:27.730503] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:20.053 [2024-07-22 10:43:27.730583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77397 ] 00:48:20.053 [2024-07-22 10:43:27.848436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:48:20.053 [2024-07-22 10:43:27.870680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:20.053 [2024-07-22 10:43:27.909971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:48:20.053 10:43:27 accel.accel_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:20.053 10:43:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:20.054 10:43:27 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:48:20.054 10:43:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:48:21.430 10:43:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:21.430 00:48:21.430 real 0m1.388s 00:48:21.430 user 0m1.205s 00:48:21.430 sys 0m0.099s 00:48:21.430 10:43:29 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:21.430 10:43:29 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:48:21.430 ************************************ 00:48:21.430 END TEST accel_decomp_full 00:48:21.430 ************************************ 00:48:21.430 10:43:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:21.430 10:43:29 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:48:21.430 10:43:29 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:48:21.430 10:43:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:21.430 10:43:29 accel -- common/autotest_common.sh@10 -- # set +x 00:48:21.430 ************************************ 00:48:21.430 START TEST accel_decomp_mcore 00:48:21.430 ************************************ 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:48:21.430 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:48:21.430 [2024-07-22 10:43:29.188546] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:21.431 [2024-07-22 10:43:29.188628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77426 ] 00:48:21.431 [2024-07-22 10:43:29.307537] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
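(Reference note, not part of the captured trace.) Judging from the accel_perf command line echoed just above, the accel_decomp_mcore case reduces to a single invocation of SPDK's bundled accel_perf example. A minimal standalone sketch, reusing the paths printed in the log and dropping the "-c /dev/fd/62" accel JSON config that the test harness pipes in on fd 62; the flag readings in the comments are inferred from the surrounding trace rather than from documentation:
  # decompress the test input file (bib) for 1 second and verify the output (-y);
  # -m 0xf is the core mask behind the "Total cores available: 4" and per-core
  # reactor notices that follow in the log
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
      -y -m 0xf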
00:48:21.431 [2024-07-22 10:43:29.331056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:21.690 [2024-07-22 10:43:29.373249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:21.690 [2024-07-22 10:43:29.373360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:48:21.690 [2024-07-22 10:43:29.373546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:21.690 [2024-07-22 10:43:29.373545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:21.690 10:43:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:22.626 10:43:30 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:22.626 00:48:22.626 real 0m1.389s 00:48:22.626 user 0m0.021s 00:48:22.626 sys 0m0.003s 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:22.626 10:43:30 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:48:22.626 ************************************ 00:48:22.626 END TEST accel_decomp_mcore 00:48:22.626 ************************************ 00:48:22.884 10:43:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:22.884 10:43:30 
accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:48:22.884 10:43:30 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:48:22.884 10:43:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:22.884 10:43:30 accel -- common/autotest_common.sh@10 -- # set +x 00:48:22.884 ************************************ 00:48:22.884 START TEST accel_decomp_full_mcore 00:48:22.884 ************************************ 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:48:22.884 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:48:22.884 [2024-07-22 10:43:30.645133] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:22.884 [2024-07-22 10:43:30.645217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77468 ] 00:48:22.884 [2024-07-22 10:43:30.764497] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:22.885 [2024-07-22 10:43:30.787144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:23.144 [2024-07-22 10:43:30.829061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:23.144 [2024-07-22 10:43:30.829183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:48:23.144 [2024-07-22 10:43:30.829337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:48:23.144 [2024-07-22 10:43:30.829335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # 
IFS=: 00:48:23.144 10:43:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:24.078 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:24.078 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:24.078 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:24.078 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:24.078 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:24.337 00:48:24.337 real 0m1.405s 00:48:24.337 user 0m4.535s 00:48:24.337 sys 0m0.120s 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:24.337 10:43:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:48:24.337 ************************************ 00:48:24.337 END TEST accel_decomp_full_mcore 00:48:24.337 ************************************ 00:48:24.337 10:43:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:24.337 10:43:32 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:48:24.337 10:43:32 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:48:24.337 10:43:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:24.337 10:43:32 accel -- common/autotest_common.sh@10 -- # set +x 00:48:24.337 ************************************ 00:48:24.337 START TEST accel_decomp_mthread 00:48:24.337 ************************************ 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:48:24.337 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:48:24.337 [2024-07-22 10:43:32.116622] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:24.337 [2024-07-22 10:43:32.116703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77501 ] 00:48:24.337 [2024-07-22 10:43:32.234222] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
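(Again outside the captured trace.) The mthread variants that follow differ from the sketch earlier only in their threading flags: the core mask is gone and -T 2 is passed instead, which lines up with the single "Total cores available: 1" / "Reactor started on core 0" notices below. Under the same assumptions (paths taken from the log, harness-supplied "-c /dev/fd/62" omitted, flag meanings inferred from the trace):
  # same decompress run, but two worker threads (-T 2) on one core instead of a
  # four-core mask; accel_decomp_full_mthread additionally passes -o 0
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
      -y -T 2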
00:48:24.337 [2024-07-22 10:43:32.258735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:24.595 [2024-07-22 10:43:32.297033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:24.595 10:43:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:25.968 10:43:33 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.968 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:25.969 00:48:25.969 real 0m1.381s 00:48:25.969 user 0m1.195s 00:48:25.969 sys 0m0.102s 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:25.969 10:43:33 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:48:25.969 ************************************ 00:48:25.969 END TEST accel_decomp_mthread 00:48:25.969 ************************************ 00:48:25.969 10:43:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:25.969 10:43:33 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:48:25.969 10:43:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:48:25.969 10:43:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:25.969 10:43:33 accel -- common/autotest_common.sh@10 -- # set +x 00:48:25.969 ************************************ 00:48:25.969 START TEST accel_decomp_full_mthread 00:48:25.969 ************************************ 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:48:25.969 10:43:33 
accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:48:25.969 [2024-07-22 10:43:33.572034] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:25.969 [2024-07-22 10:43:33.572117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77535 ] 00:48:25.969 [2024-07-22 10:43:33.689980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:25.969 [2024-07-22 10:43:33.712490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:25.969 [2024-07-22 10:43:33.751604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:25.969 10:43:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:27.342 00:48:27.342 real 0m1.406s 00:48:27.342 user 0m1.219s 00:48:27.342 sys 0m0.102s 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:27.342 10:43:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:48:27.342 ************************************ 00:48:27.342 END TEST accel_decomp_full_mthread 00:48:27.342 ************************************ 00:48:27.342 10:43:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:27.342 10:43:35 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:48:27.342 10:43:35 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:48:27.342 10:43:35 accel -- 
accel/accel.sh@137 -- # build_accel_config 00:48:27.342 10:43:35 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:48:27.342 10:43:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:48:27.342 10:43:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:27.342 10:43:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:48:27.342 10:43:35 accel -- common/autotest_common.sh@10 -- # set +x 00:48:27.342 10:43:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:48:27.342 10:43:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:48:27.342 10:43:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:48:27.342 10:43:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:48:27.342 10:43:35 accel -- accel/accel.sh@41 -- # jq -r . 00:48:27.342 ************************************ 00:48:27.342 START TEST accel_dif_functional_tests 00:48:27.342 ************************************ 00:48:27.342 10:43:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:48:27.342 [2024-07-22 10:43:35.075955] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:27.342 [2024-07-22 10:43:35.076022] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77571 ] 00:48:27.342 [2024-07-22 10:43:35.193727] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:48:27.342 [2024-07-22 10:43:35.218731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:48:27.342 [2024-07-22 10:43:35.263259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:27.342 [2024-07-22 10:43:35.263445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:27.342 [2024-07-22 10:43:35.263446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:48:27.600 00:48:27.600 00:48:27.600 CUnit - A unit testing framework for C - Version 2.1-3 00:48:27.600 http://cunit.sourceforge.net/ 00:48:27.600 00:48:27.600 00:48:27.600 Suite: accel_dif 00:48:27.600 Test: verify: DIF generated, GUARD check ...passed 00:48:27.600 Test: verify: DIF generated, APPTAG check ...passed 00:48:27.600 Test: verify: DIF generated, REFTAG check ...passed 00:48:27.600 Test: verify: DIF not generated, GUARD check ...passed 00:48:27.600 Test: verify: DIF not generated, APPTAG check ...passed 00:48:27.600 Test: verify: DIF not generated, REFTAG check ...passed 00:48:27.600 Test: verify: APPTAG correct, APPTAG check ...passed 00:48:27.600 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:48:27.600 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:48:27.600 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:48:27.600 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:48:27.600 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:48:27.600 Test: verify copy: DIF generated, GUARD check ...passed 00:48:27.600 Test: verify copy: DIF generated, APPTAG check ...[2024-07-22 10:43:35.328235] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:48:27.600 [2024-07-22 10:43:35.328300] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:48:27.600 [2024-07-22 10:43:35.328323] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:48:27.600 [2024-07-22 10:43:35.328376] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:48:27.600 [2024-07-22 10:43:35.328480] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:48:27.600 passed 00:48:27.600 Test: verify copy: DIF generated, REFTAG check ...passed 00:48:27.600 Test: verify copy: DIF not generated, GUARD check ...passed 00:48:27.600 Test: verify copy: DIF not generated, APPTAG check ...passed 00:48:27.600 Test: verify copy: DIF not generated, REFTAG check ...passed 00:48:27.600 Test: generate copy: DIF generated, GUARD check ...passed 00:48:27.600 Test: generate copy: DIF generated, APTTAG check ...passed 00:48:27.600 Test: generate copy: DIF generated, REFTAG check ...passed 00:48:27.600 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:48:27.600 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:48:27.600 Test: generate copy: DIF generated, no REFTAG check flag set ...passed[2024-07-22 10:43:35.328599] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:48:27.600 [2024-07-22 10:43:35.328623] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:48:27.600 [2024-07-22 10:43:35.328648] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:48:27.600 00:48:27.600 Test: generate copy: iovecs-len validate ...passed 00:48:27.600 Test: generate copy: buffer alignment validate ...passed 00:48:27.600 00:48:27.600 Run Summary: Type Total Ran Passed Failed Inactive 00:48:27.600 suites 1 1 n/a 0 0 00:48:27.600 tests 26 26 26 0 0 00:48:27.600 asserts 115 115 115 0 n/a 00:48:27.600 00:48:27.600 Elapsed time = 0.002 seconds 00:48:27.600 [2024-07-22 10:43:35.328826] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:48:27.600 00:48:27.601 real 0m0.466s 00:48:27.601 user 0m0.571s 00:48:27.601 sys 0m0.135s 00:48:27.601 10:43:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:27.601 10:43:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:48:27.601 ************************************ 00:48:27.601 END TEST accel_dif_functional_tests 00:48:27.601 ************************************ 00:48:27.859 10:43:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:48:27.859 00:48:27.859 real 0m32.229s 00:48:27.859 user 0m33.522s 00:48:27.859 sys 0m4.055s 00:48:27.859 10:43:35 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:27.859 10:43:35 accel -- common/autotest_common.sh@10 -- # set +x 00:48:27.859 ************************************ 00:48:27.859 END TEST accel 00:48:27.859 ************************************ 00:48:27.859 10:43:35 -- common/autotest_common.sh@1142 -- # return 0 00:48:27.859 10:43:35 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:48:27.859 10:43:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:48:27.859 10:43:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:27.859 10:43:35 -- common/autotest_common.sh@10 -- # set +x 00:48:27.859 ************************************ 00:48:27.859 START TEST accel_rpc 00:48:27.859 ************************************ 00:48:27.859 10:43:35 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:48:27.859 * Looking for test storage... 00:48:27.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:48:27.859 10:43:35 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:48:27.859 10:43:35 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77637 00:48:27.859 10:43:35 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:48:27.859 10:43:35 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 77637 00:48:27.859 10:43:35 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 77637 ']' 00:48:27.859 10:43:35 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:27.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:27.859 10:43:35 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:27.859 10:43:35 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:27.859 10:43:35 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:27.859 10:43:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:48:28.117 [2024-07-22 10:43:35.799174] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:28.117 [2024-07-22 10:43:35.799237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77637 ] 00:48:28.117 [2024-07-22 10:43:35.917388] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:28.117 [2024-07-22 10:43:35.939662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:28.117 [2024-07-22 10:43:35.979763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:48:29.052 10:43:36 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:48:29.052 10:43:36 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:48:29.052 10:43:36 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:48:29.052 10:43:36 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:48:29.052 10:43:36 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:48:29.052 ************************************ 00:48:29.052 START TEST accel_assign_opcode 00:48:29.052 ************************************ 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:48:29.052 [2024-07-22 10:43:36.659105] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:48:29.052 [2024-07-22 10:43:36.671101] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:29.052 software 00:48:29.052 00:48:29.052 real 0m0.240s 
00:48:29.052 user 0m0.052s 00:48:29.052 sys 0m0.014s 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:29.052 10:43:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:48:29.052 ************************************ 00:48:29.052 END TEST accel_assign_opcode 00:48:29.052 ************************************ 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:48:29.052 10:43:36 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 77637 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 77637 ']' 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 77637 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:29.052 10:43:36 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77637 00:48:29.311 killing process with pid 77637 00:48:29.311 10:43:36 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:48:29.311 10:43:36 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:48:29.311 10:43:36 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77637' 00:48:29.311 10:43:36 accel_rpc -- common/autotest_common.sh@967 -- # kill 77637 00:48:29.311 10:43:36 accel_rpc -- common/autotest_common.sh@972 -- # wait 77637 00:48:29.570 00:48:29.570 real 0m1.665s 00:48:29.570 user 0m1.647s 00:48:29.570 sys 0m0.461s 00:48:29.570 10:43:37 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:29.570 ************************************ 00:48:29.570 END TEST accel_rpc 00:48:29.570 ************************************ 00:48:29.570 10:43:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:48:29.570 10:43:37 -- common/autotest_common.sh@1142 -- # return 0 00:48:29.570 10:43:37 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:48:29.570 10:43:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:48:29.570 10:43:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:29.570 10:43:37 -- common/autotest_common.sh@10 -- # set +x 00:48:29.570 ************************************ 00:48:29.570 START TEST app_cmdline 00:48:29.570 ************************************ 00:48:29.570 10:43:37 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:48:29.570 * Looking for test storage... 00:48:29.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:48:29.570 10:43:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:48:29.570 10:43:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77741 00:48:29.570 10:43:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:48:29.570 10:43:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77741 00:48:29.570 10:43:37 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 77741 ']' 00:48:29.570 10:43:37 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:29.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:48:29.570 10:43:37 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:29.570 10:43:37 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:29.570 10:43:37 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:29.570 10:43:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:48:29.829 [2024-07-22 10:43:37.540884] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:29.829 [2024-07-22 10:43:37.540949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77741 ] 00:48:29.829 [2024-07-22 10:43:37.658080] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:48:29.829 [2024-07-22 10:43:37.683760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:29.829 [2024-07-22 10:43:37.724403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:48:30.773 { 00:48:30.773 "fields": { 00:48:30.773 "commit": "8fb860b73", 00:48:30.773 "major": 24, 00:48:30.773 "minor": 9, 00:48:30.773 "patch": 0, 00:48:30.773 "suffix": "-pre" 00:48:30.773 }, 00:48:30.773 "version": "SPDK v24.09-pre git sha1 8fb860b73" 00:48:30.773 } 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:48:30.773 10:43:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:48:30.773 10:43:38 app_cmdline 
-- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:48:30.773 10:43:38 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:48:31.031 2024/07/22 10:43:38 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:48:31.031 request: 00:48:31.031 { 00:48:31.031 "method": "env_dpdk_get_mem_stats", 00:48:31.031 "params": {} 00:48:31.031 } 00:48:31.031 Got JSON-RPC error response 00:48:31.031 GoRPCClient: error on JSON-RPC call 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:48:31.031 10:43:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77741 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 77741 ']' 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 77741 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77741 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:48:31.031 killing process with pid 77741 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77741' 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@967 -- # kill 77741 00:48:31.031 10:43:38 app_cmdline -- common/autotest_common.sh@972 -- # wait 77741 00:48:31.288 00:48:31.288 real 0m1.774s 00:48:31.288 user 0m2.016s 00:48:31.288 sys 0m0.490s 00:48:31.288 10:43:39 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:31.288 10:43:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:48:31.288 ************************************ 00:48:31.288 END TEST app_cmdline 00:48:31.288 ************************************ 00:48:31.288 10:43:39 -- common/autotest_common.sh@1142 -- # return 0 00:48:31.288 10:43:39 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:48:31.288 10:43:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:48:31.288 10:43:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:31.288 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:48:31.288 ************************************ 00:48:31.288 START TEST version 00:48:31.288 ************************************ 00:48:31.288 10:43:39 version -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:48:31.545 * Looking for test storage... 00:48:31.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:48:31.545 10:43:39 version -- app/version.sh@17 -- # get_header_version major 00:48:31.545 10:43:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:48:31.545 10:43:39 version -- app/version.sh@14 -- # cut -f2 00:48:31.545 10:43:39 version -- app/version.sh@14 -- # tr -d '"' 00:48:31.545 10:43:39 version -- app/version.sh@17 -- # major=24 00:48:31.545 10:43:39 version -- app/version.sh@18 -- # get_header_version minor 00:48:31.545 10:43:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:48:31.545 10:43:39 version -- app/version.sh@14 -- # cut -f2 00:48:31.545 10:43:39 version -- app/version.sh@14 -- # tr -d '"' 00:48:31.545 10:43:39 version -- app/version.sh@18 -- # minor=9 00:48:31.545 10:43:39 version -- app/version.sh@19 -- # get_header_version patch 00:48:31.545 10:43:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:48:31.545 10:43:39 version -- app/version.sh@14 -- # cut -f2 00:48:31.545 10:43:39 version -- app/version.sh@14 -- # tr -d '"' 00:48:31.545 10:43:39 version -- app/version.sh@19 -- # patch=0 00:48:31.545 10:43:39 version -- app/version.sh@20 -- # get_header_version suffix 00:48:31.545 10:43:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:48:31.545 10:43:39 version -- app/version.sh@14 -- # cut -f2 00:48:31.545 10:43:39 version -- app/version.sh@14 -- # tr -d '"' 00:48:31.545 10:43:39 version -- app/version.sh@20 -- # suffix=-pre 00:48:31.545 10:43:39 version -- app/version.sh@22 -- # version=24.9 00:48:31.545 10:43:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:48:31.545 10:43:39 version -- app/version.sh@28 -- # version=24.9rc0 00:48:31.545 10:43:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:48:31.545 10:43:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:48:31.545 10:43:39 version -- app/version.sh@30 -- # py_version=24.9rc0 00:48:31.545 10:43:39 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:48:31.545 00:48:31.545 real 0m0.219s 00:48:31.545 user 0m0.104s 00:48:31.545 sys 0m0.170s 00:48:31.545 10:43:39 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:31.545 10:43:39 version -- common/autotest_common.sh@10 -- # set +x 00:48:31.545 ************************************ 00:48:31.545 END TEST version 00:48:31.545 ************************************ 00:48:31.803 10:43:39 -- common/autotest_common.sh@1142 -- # return 0 00:48:31.803 10:43:39 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:48:31.803 10:43:39 -- spdk/autotest.sh@198 -- # uname -s 00:48:31.803 10:43:39 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:48:31.803 10:43:39 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:48:31.803 10:43:39 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:48:31.803 10:43:39 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:48:31.803 
10:43:39 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:48:31.803 10:43:39 -- spdk/autotest.sh@260 -- # timing_exit lib 00:48:31.803 10:43:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:31.803 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:48:31.803 10:43:39 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:48:31.803 10:43:39 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:48:31.803 10:43:39 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:48:31.803 10:43:39 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:48:31.803 10:43:39 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:48:31.803 10:43:39 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:48:31.803 10:43:39 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:48:31.803 10:43:39 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:48:31.803 10:43:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:31.803 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:48:31.803 ************************************ 00:48:31.803 START TEST nvmf_tcp 00:48:31.803 ************************************ 00:48:31.803 10:43:39 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:48:31.803 * Looking for test storage... 00:48:31.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:31.803 10:43:39 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:31.803 10:43:39 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:31.803 10:43:39 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:48:31.803 10:43:39 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:31.803 10:43:39 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:31.803 10:43:39 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:31.803 10:43:39 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:48:31.803 10:43:39 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:48:31.803 10:43:39 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:48:31.803 10:43:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:48:31.803 10:43:39 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:48:31.803 10:43:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:48:31.803 10:43:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:31.803 10:43:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:48:32.061 ************************************ 00:48:32.061 START TEST nvmf_example 00:48:32.061 
************************************ 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:48:32.061 * Looking for test storage... 00:48:32.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:32.061 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:48:32.062 Cannot find device "nvmf_init_br" 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:48:32.062 Cannot find device "nvmf_tgt_br" 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:48:32.062 Cannot find device "nvmf_tgt_br2" 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:48:32.062 Cannot find device "nvmf_init_br" 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:48:32.062 10:43:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:48:32.319 Cannot find device "nvmf_tgt_br" 00:48:32.319 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:48:32.319 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:48:32.320 Cannot find device "nvmf_tgt_br2" 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:48:32.320 Cannot find device "nvmf_br" 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:48:32.320 Cannot find device "nvmf_init_if" 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:32.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:32.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
00:48:32.320 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:48:32.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:32.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:48:32.578 00:48:32.578 --- 10.0.0.2 ping statistics --- 00:48:32.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:32.578 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:48:32.578 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:32.578 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:48:32.578 00:48:32.578 --- 10.0.0.3 ping statistics --- 00:48:32.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:32.578 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:32.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:32.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:48:32.578 00:48:32.578 --- 10.0.0.1 ping statistics --- 00:48:32.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:32.578 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=78098 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 78098 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 78098 ']' 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:32.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:32.578 10:43:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.530 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:33.788 10:43:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.788 10:43:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:48:33.788 10:43:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:48:43.817 Initializing NVMe Controllers 00:48:43.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:48:43.817 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:48:43.817 Initialization complete. Launching workers. 00:48:43.817 ======================================================== 00:48:43.817 Latency(us) 00:48:43.817 Device Information : IOPS MiB/s Average min max 00:48:43.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17765.80 69.40 3603.21 578.07 24158.18 00:48:43.817 ======================================================== 00:48:43.817 Total : 17765.80 69.40 3603.21 578.07 24158.18 00:48:43.817 00:48:43.817 10:43:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:48:43.817 10:43:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:48:43.817 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:48:43.817 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:48:43.817 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:48:43.817 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:48:43.817 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:48:43.817 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:48:43.817 rmmod nvme_tcp 00:48:44.075 rmmod nvme_fabrics 00:48:44.075 rmmod nvme_keyring 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 78098 ']' 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 78098 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 78098 ']' 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 78098 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78098 00:48:44.075 killing process with pid 78098 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78098' 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 78098 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 78098 00:48:44.075 nvmf threads initialize successfully 00:48:44.075 bdev subsystem init successfully 00:48:44.075 created a nvmf target service 00:48:44.075 create targets's poll groups done 00:48:44.075 all subsystems of target started 00:48:44.075 nvmf target is running 00:48:44.075 all subsystems of target stopped 00:48:44.075 destroy targets's poll groups done 00:48:44.075 destroyed the nvmf target service 00:48:44.075 bdev subsystem finish successfully 00:48:44.075 nvmf threads destroy successfully 00:48:44.075 10:43:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:48:44.075 10:43:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:48:44.075 10:43:52 
nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:48:44.075 10:43:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:48:44.075 10:43:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:48:44.075 10:43:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:44.075 10:43:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:44.075 10:43:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:44.333 10:43:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:48:44.333 10:43:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:48:44.333 10:43:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:44.333 10:43:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:44.333 00:48:44.333 real 0m12.376s 00:48:44.333 user 0m43.520s 00:48:44.333 sys 0m2.410s 00:48:44.333 10:43:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:44.333 ************************************ 00:48:44.333 10:43:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:48:44.333 END TEST nvmf_example 00:48:44.333 ************************************ 00:48:44.333 10:43:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:48:44.333 10:43:52 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:48:44.333 10:43:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:48:44.333 10:43:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:44.333 10:43:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:48:44.333 ************************************ 00:48:44.333 START TEST nvmf_filesystem 00:48:44.333 ************************************ 00:48:44.333 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:48:44.593 * Looking for test storage... 
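The nvmf_example run above builds its target entirely over JSON-RPC: it creates a TCP transport, a 64 MiB malloc bdev, the cnode1 subsystem with that bdev as namespace 1 and a listener on 10.0.0.2:4420, then points spdk_nvme_perf at it. A minimal stand-alone sketch of the same sequence, assuming a running nvmf_tgt reachable through scripts/rpc.py on the default /var/tmp/spdk.sock and the commands issued from the SPDK repo root (the test's rpc_cmd helper hides both of these details):

    # Sketch only; RPC client path and working directory are assumptions.
    RPC=./scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192     # transport options copied from the run above
    $RPC bdev_malloc_create 64 512                   # 64 MiB malloc bdev, 512-byte blocks -> "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Exercise the target exactly as the log does above:
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'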
00:48:44.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:48:44.593 10:43:52 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:48:44.593 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # 
_examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:48:44.594 #define SPDK_CONFIG_H 00:48:44.594 #define SPDK_CONFIG_APPS 1 00:48:44.594 #define SPDK_CONFIG_ARCH native 00:48:44.594 #undef SPDK_CONFIG_ASAN 00:48:44.594 #define SPDK_CONFIG_AVAHI 1 00:48:44.594 #undef SPDK_CONFIG_CET 00:48:44.594 #define SPDK_CONFIG_COVERAGE 1 00:48:44.594 #define SPDK_CONFIG_CROSS_PREFIX 00:48:44.594 #undef SPDK_CONFIG_CRYPTO 00:48:44.594 #undef SPDK_CONFIG_CRYPTO_MLX5 00:48:44.594 #undef SPDK_CONFIG_CUSTOMOCF 00:48:44.594 #undef SPDK_CONFIG_DAOS 00:48:44.594 #define SPDK_CONFIG_DAOS_DIR 00:48:44.594 #define SPDK_CONFIG_DEBUG 1 00:48:44.594 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:48:44.594 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:48:44.594 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:48:44.594 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:48:44.594 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:48:44.594 #undef SPDK_CONFIG_DPDK_UADK 00:48:44.594 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:48:44.594 #define SPDK_CONFIG_EXAMPLES 1 00:48:44.594 #undef SPDK_CONFIG_FC 00:48:44.594 #define SPDK_CONFIG_FC_PATH 00:48:44.594 #define SPDK_CONFIG_FIO_PLUGIN 1 00:48:44.594 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:48:44.594 #undef SPDK_CONFIG_FUSE 00:48:44.594 #undef SPDK_CONFIG_FUZZER 00:48:44.594 #define SPDK_CONFIG_FUZZER_LIB 00:48:44.594 #define SPDK_CONFIG_GOLANG 1 00:48:44.594 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:48:44.594 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:48:44.594 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:48:44.594 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:48:44.594 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:48:44.594 #undef SPDK_CONFIG_HAVE_LIBBSD 00:48:44.594 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:48:44.594 #define SPDK_CONFIG_IDXD 1 00:48:44.594 #define SPDK_CONFIG_IDXD_KERNEL 1 00:48:44.594 #undef SPDK_CONFIG_IPSEC_MB 00:48:44.594 #define SPDK_CONFIG_IPSEC_MB_DIR 00:48:44.594 #define SPDK_CONFIG_ISAL 1 00:48:44.594 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:48:44.594 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:48:44.594 #define SPDK_CONFIG_LIBDIR 00:48:44.594 #undef SPDK_CONFIG_LTO 00:48:44.594 #define SPDK_CONFIG_MAX_LCORES 128 00:48:44.594 #define SPDK_CONFIG_NVME_CUSE 1 00:48:44.594 #undef SPDK_CONFIG_OCF 00:48:44.594 #define SPDK_CONFIG_OCF_PATH 00:48:44.594 #define SPDK_CONFIG_OPENSSL_PATH 00:48:44.594 #undef SPDK_CONFIG_PGO_CAPTURE 00:48:44.594 #define SPDK_CONFIG_PGO_DIR 00:48:44.594 #undef SPDK_CONFIG_PGO_USE 00:48:44.594 #define 
SPDK_CONFIG_PREFIX /usr/local 00:48:44.594 #undef SPDK_CONFIG_RAID5F 00:48:44.594 #undef SPDK_CONFIG_RBD 00:48:44.594 #define SPDK_CONFIG_RDMA 1 00:48:44.594 #define SPDK_CONFIG_RDMA_PROV verbs 00:48:44.594 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:48:44.594 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:48:44.594 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:48:44.594 #define SPDK_CONFIG_SHARED 1 00:48:44.594 #undef SPDK_CONFIG_SMA 00:48:44.594 #define SPDK_CONFIG_TESTS 1 00:48:44.594 #undef SPDK_CONFIG_TSAN 00:48:44.594 #define SPDK_CONFIG_UBLK 1 00:48:44.594 #define SPDK_CONFIG_UBSAN 1 00:48:44.594 #undef SPDK_CONFIG_UNIT_TESTS 00:48:44.594 #undef SPDK_CONFIG_URING 00:48:44.594 #define SPDK_CONFIG_URING_PATH 00:48:44.594 #undef SPDK_CONFIG_URING_ZNS 00:48:44.594 #define SPDK_CONFIG_USDT 1 00:48:44.594 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:48:44.594 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:48:44.594 #undef SPDK_CONFIG_VFIO_USER 00:48:44.594 #define SPDK_CONFIG_VFIO_USER_DIR 00:48:44.594 #define SPDK_CONFIG_VHOST 1 00:48:44.594 #define SPDK_CONFIG_VIRTIO 1 00:48:44.594 #undef SPDK_CONFIG_VTUNE 00:48:44.594 #define SPDK_CONFIG_VTUNE_DIR 00:48:44.594 #define SPDK_CONFIG_WERROR 1 00:48:44.594 #define SPDK_CONFIG_WPDK_DIR 00:48:44.594 #undef SPDK_CONFIG_XNVME 00:48:44.594 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:48:44.594 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:48:44.595 10:43:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:48:44.595 10:43:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 
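The long run of ": 0" / "export SPDK_TEST_*" pairs traced here is autotest_common.sh giving every test switch a default and exporting it; this nvmf-tcp job pins SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_RUN_UBSAN=1, and so on. A hedged sketch of the underlying bash idiom (variable names taken from the trace, defaults illustrative only):

    # Keep a value inherited from the CI environment, otherwise fall back to a default,
    # then export so child test scripts see it.
    : "${SPDK_TEST_NVMF:=1}"
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_RUN_UBSAN:=1}"
    export SPDK_RUN_UBSAN

    # Individual tests then gate themselves on these flags, e.g.:
    if [[ $SPDK_TEST_NVMF -eq 1 ]]; then
        echo "nvmf tests enabled for transport: $SPDK_TEST_NVMF_TRANSPORT"
    fi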
00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:48:44.595 
10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:48:44.595 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:48:44.596 10:43:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 78340 ]] 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 78340 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.p3xjYB 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.p3xjYB/tests/target /tmp/spdk.p3xjYB 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13054013440 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5992321024 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13054013440 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5992321024 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267736064 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=155648 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use 
avail _ mount 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:48:44.596 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=96426635264 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3276144640 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:48:44.597 * Looking for test storage... 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13054013440 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:48:44.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 
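The set_test_storage trace above parses df -T, walks the candidate directories (the test dir itself, then a mktemp fallback under /tmp/spdk.XXXXXX) and keeps the first filesystem with enough free space, exporting it as SPDK_TEST_STORAGE; here it settles on the btrfs /home volume with roughly 13 GB available against a ~2.2 GB request. A much-simplified stand-in for that check (not the real function; it assumes GNU df with --output support):

    # Accept a candidate directory if its filesystem has enough free space,
    # then advertise it the same way the trace does.
    pick_test_storage() {
        local candidate=$1
        local requested=2214592512    # requested_size from the trace: 2 GiB plus margin
        local avail
        avail=$(df -B1 --output=avail "$candidate" | awk 'NR==2 {print $1}')
        if (( avail >= requested )); then
            export SPDK_TEST_STORAGE=$candidate
            printf '* Found test storage at %s\n' "$candidate"
        else
            return 1
        fi
    }

    pick_test_storage /home/vagrant/spdk_repo/spdk/test/nvmf/target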
00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:44.597 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:44.855 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:44.856 10:43:52 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:48:44.856 Cannot find device 
"nvmf_tgt_br" 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:48:44.856 Cannot find device "nvmf_tgt_br2" 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:48:44.856 Cannot find device "nvmf_tgt_br" 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:48:44.856 Cannot find device "nvmf_tgt_br2" 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:44.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:44.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:44.856 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:45.114 10:43:52 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:45.114 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:48:45.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:45.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:48:45.114 00:48:45.114 --- 10.0.0.2 ping statistics --- 00:48:45.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:45.115 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:48:45.115 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:45.115 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:48:45.115 00:48:45.115 --- 10.0.0.3 ping statistics --- 00:48:45.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:45.115 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:45.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:45.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:48:45.115 00:48:45.115 --- 10.0.0.1 ping statistics --- 00:48:45.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:45.115 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:48:45.115 ************************************ 00:48:45.115 START TEST nvmf_filesystem_no_in_capsule 00:48:45.115 ************************************ 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:48:45.115 10:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:45.115 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=78499 00:48:45.115 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:48:45.115 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 78499 00:48:45.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
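The veth/namespace bring-up just traced (nvmf_veth_init) gives the test an all-virtual network: one veth pair for the initiator side, two pairs whose far ends move into the nvmf_tgt_ns_spdk namespace for the target, a bridge joining the host-side peers, and an iptables opening for the NVMe/TCP port, all verified with the pings above. Stripped of the harness plumbing, the topology amounts to roughly the following; this is a sketch of the commands visible in the trace, not the full common.sh helper, and it needs root.

    #!/usr/bin/env bash
    # Sketch of the veth/namespace topology built by nvmf_veth_init above.
    set -euo pipefail

    ip netns add nvmf_tgt_ns_spdk

    # Veth pairs; the *_br ends stay on the host and join the bridge below.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge so the initiator can reach both target interfaces.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Open the NVMe/TCP port and allow bridge-local forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2   # host -> target namespace, as verified above

Keeping everything on veth devices and a namespace is what lets the whole NVMe/TCP path (target, initiator, bridge) run inside a single autotest VM with no physical NICs.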
00:48:45.115 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 78499 ']' 00:48:45.115 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:45.115 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:45.115 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:45.115 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:45.115 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:45.372 [2024-07-22 10:43:53.055220] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:45.372 [2024-07-22 10:43:53.055343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:45.372 [2024-07-22 10:43:53.175398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:48:45.372 [2024-07-22 10:43:53.201219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:45.372 [2024-07-22 10:43:53.245908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:45.372 [2024-07-22 10:43:53.246101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:45.372 [2024-07-22 10:43:53.246243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:45.372 [2024-07-22 10:43:53.246300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:45.372 [2024-07-22 10:43:53.246326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:48:45.372 [2024-07-22 10:43:53.246547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:45.372 [2024-07-22 10:43:53.246730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:48:45.372 [2024-07-22 10:43:53.247230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:45.372 [2024-07-22 10:43:53.247792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:46.303 [2024-07-22 10:43:53.957415] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:46.303 10:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:46.303 Malloc1 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:46.303 [2024-07-22 10:43:54.120177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:48:46.303 { 00:48:46.303 "aliases": [ 00:48:46.303 "2bbfda43-a0f2-4977-b554-c797dd35b811" 00:48:46.303 ], 00:48:46.303 "assigned_rate_limits": { 00:48:46.303 "r_mbytes_per_sec": 0, 00:48:46.303 "rw_ios_per_sec": 0, 00:48:46.303 "rw_mbytes_per_sec": 0, 00:48:46.303 "w_mbytes_per_sec": 0 00:48:46.303 }, 00:48:46.303 "block_size": 512, 00:48:46.303 "claim_type": "exclusive_write", 00:48:46.303 "claimed": true, 00:48:46.303 "driver_specific": {}, 00:48:46.303 "memory_domains": [ 00:48:46.303 { 00:48:46.303 "dma_device_id": "system", 00:48:46.303 "dma_device_type": 1 00:48:46.303 }, 00:48:46.303 { 00:48:46.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:48:46.303 "dma_device_type": 2 00:48:46.303 } 00:48:46.303 ], 00:48:46.303 "name": "Malloc1", 00:48:46.303 "num_blocks": 1048576, 00:48:46.303 "product_name": "Malloc disk", 00:48:46.303 "supported_io_types": { 00:48:46.303 "abort": true, 00:48:46.303 "compare": false, 00:48:46.303 "compare_and_write": false, 00:48:46.303 "copy": true, 00:48:46.303 "flush": true, 00:48:46.303 "get_zone_info": false, 00:48:46.303 "nvme_admin": false, 00:48:46.303 "nvme_io": false, 00:48:46.303 "nvme_io_md": false, 00:48:46.303 "nvme_iov_md": false, 00:48:46.303 "read": true, 00:48:46.303 "reset": true, 00:48:46.303 "seek_data": false, 00:48:46.303 "seek_hole": false, 00:48:46.303 "unmap": true, 00:48:46.303 
"write": true, 00:48:46.303 "write_zeroes": true, 00:48:46.303 "zcopy": true, 00:48:46.303 "zone_append": false, 00:48:46.303 "zone_management": false 00:48:46.303 }, 00:48:46.303 "uuid": "2bbfda43-a0f2-4977-b554-c797dd35b811", 00:48:46.303 "zoned": false 00:48:46.303 } 00:48:46.303 ]' 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:48:46.303 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:48:46.561 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:48:46.561 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:48:46.561 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:48:46.561 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:48:46.561 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:48:46.561 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:48:46.561 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:48:46.561 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:48:46.561 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:48:46.561 10:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes 
nvme0n1 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:48:49.085 10:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:50.016 ************************************ 00:48:50.016 START TEST filesystem_ext4 00:48:50.016 ************************************ 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:48:50.016 10:43:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:48:50.016 mke2fs 1.46.5 (30-Dec-2021) 00:48:50.016 Discarding device blocks: 0/522240 done 00:48:50.016 Creating filesystem with 522240 1k blocks and 130560 inodes 00:48:50.016 Filesystem UUID: f44eced8-b26e-483d-a1fb-0b6aa0a9195d 00:48:50.016 Superblock backups stored on blocks: 00:48:50.016 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:48:50.016 00:48:50.016 Allocating group tables: 0/64 done 00:48:50.016 Writing inode tables: 0/64 done 00:48:50.016 Creating journal (8192 blocks): done 00:48:50.016 Writing superblocks and filesystem accounting information: 0/64 done 00:48:50.016 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:48:50.016 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:48:50.274 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:48:50.274 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:48:50.274 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:48:50.274 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 78499 00:48:50.274 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:48:50.274 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:48:50.274 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:48:50.274 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:48:50.274 ************************************ 00:48:50.274 END TEST filesystem_ext4 00:48:50.274 ************************************ 00:48:50.274 00:48:50.274 real 0m0.363s 00:48:50.274 user 0m0.037s 00:48:50.274 sys 0m0.072s 00:48:50.274 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:50.274 10:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:48:50.274 10:43:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:50.274 ************************************ 00:48:50.274 START TEST filesystem_btrfs 00:48:50.274 ************************************ 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:48:50.274 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:48:50.532 btrfs-progs v6.6.2 00:48:50.532 See https://btrfs.readthedocs.io for more information. 00:48:50.532 00:48:50.532 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:48:50.532 NOTE: several default settings have changed in version 5.15, please make sure 00:48:50.532 this does not affect your deployments: 00:48:50.532 - DUP for metadata (-m dup) 00:48:50.532 - enabled no-holes (-O no-holes) 00:48:50.532 - enabled free-space-tree (-R free-space-tree) 00:48:50.532 00:48:50.532 Label: (null) 00:48:50.532 UUID: 0e0f8b3f-104e-4f22-944c-a18b2d582989 00:48:50.532 Node size: 16384 00:48:50.532 Sector size: 4096 00:48:50.532 Filesystem size: 510.00MiB 00:48:50.532 Block group profiles: 00:48:50.532 Data: single 8.00MiB 00:48:50.532 Metadata: DUP 32.00MiB 00:48:50.532 System: DUP 8.00MiB 00:48:50.532 SSD detected: yes 00:48:50.532 Zoned device: no 00:48:50.532 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:48:50.532 Runtime features: free-space-tree 00:48:50.532 Checksum: crc32c 00:48:50.532 Number of devices: 1 00:48:50.532 Devices: 00:48:50.532 ID SIZE PATH 00:48:50.532 1 510.00MiB /dev/nvme0n1p1 00:48:50.532 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 78499 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:48:50.532 00:48:50.532 real 0m0.302s 00:48:50.532 user 0m0.033s 00:48:50.532 sys 0m0.086s 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:48:50.532 ************************************ 00:48:50.532 END TEST filesystem_btrfs 00:48:50.532 ************************************ 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:50.532 ************************************ 00:48:50.532 START TEST filesystem_xfs 00:48:50.532 ************************************ 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:48:50.532 10:43:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:48:50.789 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:48:50.789 = sectsz=512 attr=2, projid32bit=1 00:48:50.789 = crc=1 finobt=1, sparse=1, rmapbt=0 00:48:50.789 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:48:50.789 data = bsize=4096 blocks=130560, imaxpct=25 00:48:50.789 = sunit=0 swidth=0 blks 00:48:50.789 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:48:50.789 log =internal log bsize=4096 blocks=16384, version=2 00:48:50.789 = sectsz=512 sunit=0 blks, lazy-count=1 00:48:50.789 realtime =none extsz=4096 blocks=0, rtextents=0 00:48:51.354 Discarding blocks...Done. 
00:48:51.354 10:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:48:51.354 10:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 78499 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:48:53.880 ************************************ 00:48:53.880 END TEST filesystem_xfs 00:48:53.880 ************************************ 00:48:53.880 00:48:53.880 real 0m3.059s 00:48:53.880 user 0m0.035s 00:48:53.880 sys 0m0.085s 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:48:53.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:48:53.880 10:44:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 78499 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 78499 ']' 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 78499 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78499 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78499' 00:48:53.880 killing process with pid 78499 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 78499 00:48:53.880 10:44:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 78499 00:48:54.139 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:48:54.139 00:48:54.139 real 0m9.064s 00:48:54.139 user 0m34.119s 00:48:54.139 sys 0m2.086s 00:48:54.139 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:54.139 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:54.139 ************************************ 00:48:54.139 END TEST nvmf_filesystem_no_in_capsule 00:48:54.139 ************************************ 00:48:54.398 10:44:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:48:54.398 10:44:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:48:54.398 10:44:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
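That closes the zero-capsule pass; the in-capsule variant that starts next repeats the same flow with in_capsule=4096, i.e. -c 4096 on the transport. Condensed from the trace, one pass is essentially the sequence below. Treat it as a sketch rather than the literal filesystem.sh: rpc.py stands in for the harness's rpc_cmd wrapper (its path under spdk_repo is assumed here), the hostnqn and addresses are taken from the log, and error handling, lsblk checks, and the fsync-retry loop are omitted.

    #!/usr/bin/env bash
    # Sketch of the nvmf_filesystem test cycle traced above.
    set -euo pipefail
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target side: TCP transport (in-capsule data size 0 here, 4096 in the second pass),
    # a 512 MiB malloc bdev, one subsystem, its namespace, and a listener on 10.0.0.2:4420.
    "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0
    "$RPC" bdev_malloc_create 512 512 -b Malloc1
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc1
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect, carve one GPT partition, then exercise each filesystem.
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    mkdir -p /mnt/device

    for fstype in ext4 btrfs xfs; do
        if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi   # ext4 wants -F, btrfs/xfs -f
        mkfs."$fstype" "$force" /dev/nvme0n1p1
        mount /dev/nvme0n1p1 /mnt/device
        touch /mnt/device/aaa && sync          # prove the remote namespace takes writes
        rm /mnt/device/aaa && sync
        umount /mnt/device
    done

    # Teardown mirrors the end of the trace.
    parted -s /dev/nvme0n1 rm 1
    nvme disconnect -n "$NQN"
    "$RPC" nvmf_delete_subsystem "$NQN"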
00:48:54.398 10:44:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:54.398 10:44:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:48:54.398 ************************************ 00:48:54.399 START TEST nvmf_filesystem_in_capsule 00:48:54.399 ************************************ 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=78810 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 78810 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 78810 ']' 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:54.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:54.399 10:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:54.399 [2024-07-22 10:44:02.189374] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:48:54.399 [2024-07-22 10:44:02.189443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:54.399 [2024-07-22 10:44:02.308967] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:48:54.657 [2024-07-22 10:44:02.333145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:54.657 [2024-07-22 10:44:02.373993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:54.657 [2024-07-22 10:44:02.374249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:48:54.657 [2024-07-22 10:44:02.374473] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:54.657 [2024-07-22 10:44:02.374519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:54.657 [2024-07-22 10:44:02.374544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:54.657 [2024-07-22 10:44:02.374734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:54.657 [2024-07-22 10:44:02.374923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:48:54.657 [2024-07-22 10:44:02.375003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:54.657 [2024-07-22 10:44:02.375007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:55.222 [2024-07-22 10:44:03.102813] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:55.222 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:55.479 Malloc1 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:55.479 10:44:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:55.479 [2024-07-22 10:44:03.270944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:48:55.479 { 00:48:55.479 "aliases": [ 00:48:55.479 "fc05da76-4a08-4560-8531-0856084178d9" 00:48:55.479 ], 00:48:55.479 "assigned_rate_limits": { 00:48:55.479 "r_mbytes_per_sec": 0, 00:48:55.479 "rw_ios_per_sec": 0, 00:48:55.479 "rw_mbytes_per_sec": 0, 00:48:55.479 "w_mbytes_per_sec": 0 00:48:55.479 }, 00:48:55.479 "block_size": 512, 00:48:55.479 "claim_type": "exclusive_write", 00:48:55.479 "claimed": true, 00:48:55.479 "driver_specific": {}, 00:48:55.479 "memory_domains": [ 00:48:55.479 { 00:48:55.479 "dma_device_id": "system", 00:48:55.479 "dma_device_type": 1 00:48:55.479 }, 00:48:55.479 { 00:48:55.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:48:55.479 "dma_device_type": 2 00:48:55.479 } 00:48:55.479 ], 00:48:55.479 "name": "Malloc1", 00:48:55.479 "num_blocks": 1048576, 00:48:55.479 "product_name": "Malloc disk", 00:48:55.479 "supported_io_types": { 00:48:55.479 "abort": true, 00:48:55.479 "compare": false, 00:48:55.479 "compare_and_write": false, 00:48:55.479 "copy": true, 00:48:55.479 "flush": true, 00:48:55.479 
"get_zone_info": false, 00:48:55.479 "nvme_admin": false, 00:48:55.479 "nvme_io": false, 00:48:55.479 "nvme_io_md": false, 00:48:55.479 "nvme_iov_md": false, 00:48:55.479 "read": true, 00:48:55.479 "reset": true, 00:48:55.479 "seek_data": false, 00:48:55.479 "seek_hole": false, 00:48:55.479 "unmap": true, 00:48:55.479 "write": true, 00:48:55.479 "write_zeroes": true, 00:48:55.479 "zcopy": true, 00:48:55.479 "zone_append": false, 00:48:55.479 "zone_management": false 00:48:55.479 }, 00:48:55.479 "uuid": "fc05da76-4a08-4560-8531-0856084178d9", 00:48:55.479 "zoned": false 00:48:55.479 } 00:48:55.479 ]' 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:48:55.479 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:48:55.737 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:48:55.737 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:48:55.737 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:48:55.737 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:48:55.737 10:44:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o 
NAME,SERIAL 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:48:58.267 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:48:58.268 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:48:58.268 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:48:58.268 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:48:58.268 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:48:58.268 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:48:58.268 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:48:58.268 10:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:59.201 ************************************ 00:48:59.201 START TEST filesystem_in_capsule_ext4 00:48:59.201 ************************************ 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:48:59.201 10:44:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:48:59.201 mke2fs 1.46.5 (30-Dec-2021) 00:48:59.201 Discarding device blocks: 0/522240 done 00:48:59.201 Creating filesystem with 522240 1k blocks and 130560 inodes 00:48:59.201 Filesystem UUID: ccc0ad7e-6c12-4afe-bfbb-7d080a2218ca 00:48:59.201 Superblock backups stored on blocks: 00:48:59.201 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:48:59.201 00:48:59.201 Allocating group tables: 0/64 done 00:48:59.201 Writing inode tables: 0/64 done 00:48:59.201 Creating journal (8192 blocks): done 00:48:59.201 Writing superblocks and filesystem accounting information: 0/64 done 00:48:59.201 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:48:59.201 10:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:48:59.201 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:48:59.201 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:48:59.201 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:48:59.201 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:48:59.201 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:48:59.201 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 78810 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:48:59.465 ************************************ 00:48:59.465 END TEST filesystem_in_capsule_ext4 00:48:59.465 ************************************ 00:48:59.465 00:48:59.465 real 0m0.376s 00:48:59.465 user 0m0.038s 00:48:59.465 sys 0m0.077s 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:48:59.465 
10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:59.465 ************************************ 00:48:59.465 START TEST filesystem_in_capsule_btrfs 00:48:59.465 ************************************ 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:48:59.465 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:48:59.466 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:48:59.466 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:48:59.466 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:48:59.466 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:48:59.466 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:48:59.466 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:48:59.466 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:48:59.466 btrfs-progs v6.6.2 00:48:59.466 See https://btrfs.readthedocs.io for more information. 00:48:59.466 00:48:59.466 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:48:59.466 NOTE: several default settings have changed in version 5.15, please make sure 00:48:59.466 this does not affect your deployments: 00:48:59.466 - DUP for metadata (-m dup) 00:48:59.466 - enabled no-holes (-O no-holes) 00:48:59.466 - enabled free-space-tree (-R free-space-tree) 00:48:59.466 00:48:59.466 Label: (null) 00:48:59.466 UUID: ad75677c-adec-47ba-ac01-44eaee309f33 00:48:59.466 Node size: 16384 00:48:59.466 Sector size: 4096 00:48:59.466 Filesystem size: 510.00MiB 00:48:59.466 Block group profiles: 00:48:59.466 Data: single 8.00MiB 00:48:59.466 Metadata: DUP 32.00MiB 00:48:59.466 System: DUP 8.00MiB 00:48:59.466 SSD detected: yes 00:48:59.466 Zoned device: no 00:48:59.466 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:48:59.466 Runtime features: free-space-tree 00:48:59.466 Checksum: crc32c 00:48:59.466 Number of devices: 1 00:48:59.466 Devices: 00:48:59.466 ID SIZE PATH 00:48:59.466 1 510.00MiB /dev/nvme0n1p1 00:48:59.466 00:48:59.466 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:48:59.466 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 78810 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:48:59.724 00:48:59.724 real 0m0.229s 00:48:59.724 user 0m0.031s 00:48:59.724 sys 0m0.083s 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:59.724 ************************************ 00:48:59.724 END TEST filesystem_in_capsule_btrfs 00:48:59.724 ************************************ 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:48:59.724 ************************************ 00:48:59.724 START TEST filesystem_in_capsule_xfs 00:48:59.724 ************************************ 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:48:59.724 10:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:48:59.724 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:48:59.724 = sectsz=512 attr=2, projid32bit=1 00:48:59.724 = crc=1 finobt=1, sparse=1, rmapbt=0 00:48:59.724 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:48:59.724 data = bsize=4096 blocks=130560, imaxpct=25 00:48:59.724 = sunit=0 swidth=0 blks 00:48:59.724 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:48:59.724 log =internal log bsize=4096 blocks=16384, version=2 00:48:59.724 = sectsz=512 sunit=0 blks, lazy-count=1 00:48:59.724 realtime =none extsz=4096 blocks=0, rtextents=0 00:49:00.655 Discarding blocks...Done. 
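Each filesystem case in this test (ext4 and btrfs earlier, xfs here) runs the same mount/write/verify cycle once the filesystem is created. Condensed, using the commands visible in the trace; the fstype argument and the if/else are illustrative shorthand, not the literal make_filesystem helper:

#!/usr/bin/env bash
# Condensed per-filesystem check, as exercised by target/filesystem.sh in the trace.
set -euo pipefail

fstype=$1             # ext4 | btrfs | xfs
dev=/dev/nvme0n1p1    # partition created earlier with parted

if [ "$fstype" = ext4 ]; then
  mkfs.ext4 -F "$dev"          # ext4 forces with -F
else
  "mkfs.$fstype" -f "$dev"     # btrfs and xfs force with -f
fi

mkdir -p /mnt/device
mount "$dev" /mnt/device
touch /mnt/device/aaa          # write a file, flush, remove it, flush again
sync
rm /mnt/device/aaa
sync
umount /mnt/device
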
00:49:00.655 10:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:49:00.655 10:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 78810 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:49:02.584 ************************************ 00:49:02.584 END TEST filesystem_in_capsule_xfs 00:49:02.584 ************************************ 00:49:02.584 00:49:02.584 real 0m2.632s 00:49:02.584 user 0m0.040s 00:49:02.584 sys 0m0.078s 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:49:02.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:49:02.584 10:44:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 78810 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 78810 ']' 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 78810 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78810 00:49:02.584 killing process with pid 78810 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78810' 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 78810 00:49:02.584 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 78810 00:49:02.842 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:49:02.842 00:49:02.842 real 0m8.596s 00:49:02.842 user 0m32.496s 00:49:02.842 sys 0m1.885s 00:49:02.842 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:49:02.842 ************************************ 00:49:02.842 END TEST nvmf_filesystem_in_capsule 00:49:02.842 ************************************ 00:49:02.842 10:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:49:03.100 rmmod nvme_tcp 00:49:03.100 rmmod nvme_fabrics 00:49:03.100 rmmod nvme_keyring 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:49:03.100 ************************************ 00:49:03.100 END TEST nvmf_filesystem 00:49:03.100 ************************************ 00:49:03.100 00:49:03.100 real 0m18.769s 00:49:03.100 user 1m6.945s 00:49:03.100 sys 0m4.530s 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:49:03.100 10:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:49:03.100 10:44:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:49:03.100 10:44:11 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:49:03.100 10:44:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:49:03.100 10:44:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:03.100 10:44:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:49:03.100 ************************************ 00:49:03.100 START TEST nvmf_target_discovery 00:49:03.100 ************************************ 00:49:03.100 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:49:03.359 * Looking for test storage... 
00:49:03.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:49:03.359 10:44:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:03.359 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:49:03.359 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:03.359 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:03.359 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:03.359 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:03.359 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:03.359 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:03.359 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:49:03.360 Cannot find device "nvmf_tgt_br" 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:49:03.360 Cannot find device "nvmf_tgt_br2" 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:49:03.360 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:49:03.619 Cannot find device "nvmf_tgt_br" 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:49:03.619 Cannot find device "nvmf_tgt_br2" 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:03.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:03.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:49:03.619 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:03.619 10:44:11 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:49:03.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:03.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:49:03.877 00:49:03.877 --- 10.0.0.2 ping statistics --- 00:49:03.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:03.877 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:49:03.877 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:03.877 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:49:03.877 00:49:03.877 --- 10.0.0.3 ping statistics --- 00:49:03.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:03.877 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:03.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:03.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:49:03.877 00:49:03.877 --- 10.0.0.1 ping statistics --- 00:49:03.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:03.877 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=79261 00:49:03.877 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:49:03.878 10:44:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 79261 00:49:03.878 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 79261 ']' 00:49:03.878 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:03.878 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:03.878 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:49:03.878 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:03.878 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:49:03.878 10:44:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:03.878 [2024-07-22 10:44:11.660846] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:49:03.878 [2024-07-22 10:44:11.660918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:03.878 [2024-07-22 10:44:11.780240] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:49:03.878 [2024-07-22 10:44:11.803302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:04.136 [2024-07-22 10:44:11.846613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:04.136 [2024-07-22 10:44:11.846865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:04.136 [2024-07-22 10:44:11.846953] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:04.136 [2024-07-22 10:44:11.846998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:04.136 [2024-07-22 10:44:11.847023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
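The NVMe/TCP test network that nvmf_veth_init put together a little earlier (the ip netns, ip link, ip addr, bridge and iptables calls above, followed by the three ping checks) boils down to the sketch below; interface names and addresses are exactly the ones printed in the trace.

#!/usr/bin/env bash
# Veth/bridge topology built by nvmf_veth_init, reconstructed from the trace above.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# One initiator-side veth pair and two target-side pairs.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign the test addresses.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and bridge the host-side ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic, allow forwarding across the bridge, then sanity-ping all three addresses.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
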
00:49:04.136 [2024-07-22 10:44:11.847243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:49:04.136 [2024-07-22 10:44:11.847429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:49:04.136 [2024-07-22 10:44:11.848251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:49:04.136 [2024-07-22 10:44:11.848252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.702 [2024-07-22 10:44:12.578401] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.702 Null1 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.702 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:49:04.970 [2024-07-22 10:44:12.660539] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:49:04.970 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 Null2 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 Null3 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 Null4 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:04.971 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.971 
10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -a 10.0.0.2 -s 4420 00:49:05.229 00:49:05.229 Discovery Log Number of Records 6, Generation counter 6 00:49:05.229 =====Discovery Log Entry 0====== 00:49:05.229 trtype: tcp 00:49:05.229 adrfam: ipv4 00:49:05.229 subtype: current discovery subsystem 00:49:05.229 treq: not required 00:49:05.229 portid: 0 00:49:05.229 trsvcid: 4420 00:49:05.229 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:49:05.229 traddr: 10.0.0.2 00:49:05.229 eflags: explicit discovery connections, duplicate discovery information 00:49:05.229 sectype: none 00:49:05.229 =====Discovery Log Entry 1====== 00:49:05.229 trtype: tcp 00:49:05.229 adrfam: ipv4 00:49:05.229 subtype: nvme subsystem 00:49:05.229 treq: not required 00:49:05.229 portid: 0 00:49:05.229 trsvcid: 4420 00:49:05.229 subnqn: nqn.2016-06.io.spdk:cnode1 00:49:05.229 traddr: 10.0.0.2 00:49:05.229 eflags: none 00:49:05.229 sectype: none 00:49:05.229 =====Discovery Log Entry 2====== 00:49:05.229 trtype: tcp 00:49:05.229 adrfam: ipv4 00:49:05.229 subtype: nvme subsystem 00:49:05.229 treq: not required 00:49:05.229 portid: 0 00:49:05.229 trsvcid: 4420 00:49:05.229 subnqn: nqn.2016-06.io.spdk:cnode2 00:49:05.229 traddr: 10.0.0.2 00:49:05.229 eflags: none 00:49:05.229 sectype: none 00:49:05.229 =====Discovery Log Entry 3====== 00:49:05.229 trtype: tcp 00:49:05.229 adrfam: ipv4 00:49:05.229 subtype: nvme subsystem 00:49:05.229 treq: not required 00:49:05.229 portid: 0 00:49:05.229 trsvcid: 4420 00:49:05.229 subnqn: nqn.2016-06.io.spdk:cnode3 00:49:05.229 traddr: 10.0.0.2 00:49:05.229 eflags: none 00:49:05.229 sectype: none 00:49:05.229 =====Discovery Log Entry 4====== 00:49:05.229 trtype: tcp 00:49:05.229 adrfam: ipv4 00:49:05.229 subtype: nvme subsystem 00:49:05.229 treq: not required 00:49:05.229 portid: 0 00:49:05.229 trsvcid: 4420 00:49:05.229 subnqn: nqn.2016-06.io.spdk:cnode4 00:49:05.229 traddr: 10.0.0.2 00:49:05.229 eflags: none 00:49:05.229 sectype: none 00:49:05.229 =====Discovery Log Entry 5====== 00:49:05.229 trtype: tcp 00:49:05.229 adrfam: ipv4 00:49:05.229 subtype: discovery subsystem referral 00:49:05.229 treq: not required 00:49:05.229 portid: 0 00:49:05.229 trsvcid: 4430 00:49:05.229 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:49:05.229 traddr: 10.0.0.2 00:49:05.229 eflags: none 00:49:05.229 sectype: none 00:49:05.229 Perform nvmf subsystem discovery via RPC 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.229 [ 00:49:05.229 { 00:49:05.229 "allow_any_host": true, 00:49:05.229 "hosts": [], 00:49:05.229 "listen_addresses": [ 00:49:05.229 { 00:49:05.229 "adrfam": "IPv4", 00:49:05.229 "traddr": "10.0.0.2", 00:49:05.229 "trsvcid": "4420", 00:49:05.229 "trtype": "TCP" 00:49:05.229 } 00:49:05.229 ], 00:49:05.229 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:49:05.229 "subtype": "Discovery" 00:49:05.229 }, 00:49:05.229 { 00:49:05.229 "allow_any_host": true, 00:49:05.229 "hosts": [], 00:49:05.229 "listen_addresses": [ 00:49:05.229 { 
00:49:05.229 "adrfam": "IPv4", 00:49:05.229 "traddr": "10.0.0.2", 00:49:05.229 "trsvcid": "4420", 00:49:05.229 "trtype": "TCP" 00:49:05.229 } 00:49:05.229 ], 00:49:05.229 "max_cntlid": 65519, 00:49:05.229 "max_namespaces": 32, 00:49:05.229 "min_cntlid": 1, 00:49:05.229 "model_number": "SPDK bdev Controller", 00:49:05.229 "namespaces": [ 00:49:05.229 { 00:49:05.229 "bdev_name": "Null1", 00:49:05.229 "name": "Null1", 00:49:05.229 "nguid": "D15221FFC3F64B2F9DFE461872979EFD", 00:49:05.229 "nsid": 1, 00:49:05.229 "uuid": "d15221ff-c3f6-4b2f-9dfe-461872979efd" 00:49:05.229 } 00:49:05.229 ], 00:49:05.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:49:05.229 "serial_number": "SPDK00000000000001", 00:49:05.229 "subtype": "NVMe" 00:49:05.229 }, 00:49:05.229 { 00:49:05.229 "allow_any_host": true, 00:49:05.229 "hosts": [], 00:49:05.229 "listen_addresses": [ 00:49:05.229 { 00:49:05.229 "adrfam": "IPv4", 00:49:05.229 "traddr": "10.0.0.2", 00:49:05.229 "trsvcid": "4420", 00:49:05.229 "trtype": "TCP" 00:49:05.229 } 00:49:05.229 ], 00:49:05.229 "max_cntlid": 65519, 00:49:05.229 "max_namespaces": 32, 00:49:05.229 "min_cntlid": 1, 00:49:05.229 "model_number": "SPDK bdev Controller", 00:49:05.229 "namespaces": [ 00:49:05.229 { 00:49:05.229 "bdev_name": "Null2", 00:49:05.229 "name": "Null2", 00:49:05.229 "nguid": "B7624BF93F1A43528227D94586CDB887", 00:49:05.229 "nsid": 1, 00:49:05.229 "uuid": "b7624bf9-3f1a-4352-8227-d94586cdb887" 00:49:05.229 } 00:49:05.229 ], 00:49:05.229 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:49:05.229 "serial_number": "SPDK00000000000002", 00:49:05.229 "subtype": "NVMe" 00:49:05.229 }, 00:49:05.229 { 00:49:05.229 "allow_any_host": true, 00:49:05.229 "hosts": [], 00:49:05.229 "listen_addresses": [ 00:49:05.229 { 00:49:05.229 "adrfam": "IPv4", 00:49:05.229 "traddr": "10.0.0.2", 00:49:05.229 "trsvcid": "4420", 00:49:05.229 "trtype": "TCP" 00:49:05.229 } 00:49:05.229 ], 00:49:05.229 "max_cntlid": 65519, 00:49:05.229 "max_namespaces": 32, 00:49:05.229 "min_cntlid": 1, 00:49:05.229 "model_number": "SPDK bdev Controller", 00:49:05.229 "namespaces": [ 00:49:05.229 { 00:49:05.229 "bdev_name": "Null3", 00:49:05.229 "name": "Null3", 00:49:05.229 "nguid": "AFE20F5809CD4128AB55301683D936EE", 00:49:05.229 "nsid": 1, 00:49:05.229 "uuid": "afe20f58-09cd-4128-ab55-301683d936ee" 00:49:05.229 } 00:49:05.229 ], 00:49:05.229 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:49:05.229 "serial_number": "SPDK00000000000003", 00:49:05.229 "subtype": "NVMe" 00:49:05.229 }, 00:49:05.229 { 00:49:05.229 "allow_any_host": true, 00:49:05.229 "hosts": [], 00:49:05.229 "listen_addresses": [ 00:49:05.229 { 00:49:05.229 "adrfam": "IPv4", 00:49:05.229 "traddr": "10.0.0.2", 00:49:05.229 "trsvcid": "4420", 00:49:05.229 "trtype": "TCP" 00:49:05.229 } 00:49:05.229 ], 00:49:05.229 "max_cntlid": 65519, 00:49:05.229 "max_namespaces": 32, 00:49:05.229 "min_cntlid": 1, 00:49:05.229 "model_number": "SPDK bdev Controller", 00:49:05.229 "namespaces": [ 00:49:05.229 { 00:49:05.229 "bdev_name": "Null4", 00:49:05.229 "name": "Null4", 00:49:05.229 "nguid": "BD10384E14C34A09B73B71D61C0F7000", 00:49:05.229 "nsid": 1, 00:49:05.229 "uuid": "bd10384e-14c3-4a09-b73b-71d61c0f7000" 00:49:05.229 } 00:49:05.229 ], 00:49:05.229 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:49:05.229 "serial_number": "SPDK00000000000004", 00:49:05.229 "subtype": "NVMe" 00:49:05.229 } 00:49:05.229 ] 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.229 10:44:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.229 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:49:05.230 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:49:05.486 rmmod nvme_tcp 00:49:05.486 rmmod nvme_fabrics 00:49:05.486 rmmod nvme_keyring 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 79261 ']' 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 79261 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 79261 ']' 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 79261 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:49:05.486 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:49:05.487 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79261 00:49:05.487 killing process with pid 79261 00:49:05.487 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- 
# process_name=reactor_0 00:49:05.487 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:49:05.487 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79261' 00:49:05.487 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 79261 00:49:05.487 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 79261 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:49:05.743 00:49:05.743 real 0m2.467s 00:49:05.743 user 0m6.661s 00:49:05.743 sys 0m0.704s 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:49:05.743 ************************************ 00:49:05.743 10:44:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:05.743 END TEST nvmf_target_discovery 00:49:05.743 ************************************ 00:49:05.743 10:44:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:49:05.743 10:44:13 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:49:05.743 10:44:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:49:05.743 10:44:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:05.743 10:44:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:49:05.743 ************************************ 00:49:05.743 START TEST nvmf_referrals 00:49:05.743 ************************************ 00:49:05.743 10:44:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:49:06.000 * Looking for test storage... 
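Before the referrals run below, the discovery flow traced above is easier to follow as a condensed sketch. This is not the literal target/discovery.sh (the suite drives everything through its rpc_cmd wrapper plus extra checks); it simply strings together the same RPCs and flags that appear in the trace, assuming scripts/rpc.py talks to the default /var/tmp/spdk.sock and that nvme-cli fills in its own hostnqn/hostid (the suite passes them explicitly):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # same transport options as the trace
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512                   # null bdev backing namespace $i
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i   # -a: allow any host, -s: serial number
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420                      # 6 records: the discovery subsystem + 4 NVMe subsystems + 1 referral

Teardown is the mirror image, as traced above: nvmf_delete_subsystem and bdev_null_delete per cnode, then nvmf_discovery_remove_referral.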
00:49:06.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:49:06.000 Cannot find device "nvmf_tgt_br" 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:49:06.000 Cannot find device "nvmf_tgt_br2" 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:49:06.000 Cannot find device "nvmf_tgt_br" 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:49:06.000 Cannot find device "nvmf_tgt_br2" 
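The "Cannot find device" errors just above (and the namespace-deletion errors that follow) are harmless: nvmf_veth_init first tears down any topology left over from a previous run, and on a clean host there is nothing to delete. After the rebuild that follows, the resulting layout can be spot-checked by hand with standard iproute2 commands; this is a hypothetical sanity check, not something the suite runs:

    ip netns list                                   # expect nvmf_tgt_ns_spdk
    ip addr show dev nvmf_init_if                   # 10.0.0.1/24, the initiator-side veth
    ip netns exec nvmf_tgt_ns_spdk ip addr show     # 10.0.0.2/24 and 10.0.0.3/24 on the target-side veths
    ip link show master nvmf_br                     # bridge ports: nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2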
00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:06.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:06.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:49:06.000 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:06.257 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:06.257 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:06.257 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:06.257 10:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:49:06.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:06.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:49:06.257 00:49:06.257 --- 10.0.0.2 ping statistics --- 00:49:06.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:06.257 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:49:06.257 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:06.257 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:49:06.257 00:49:06.257 --- 10.0.0.3 ping statistics --- 00:49:06.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:06.257 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:06.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:06.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:49:06.257 00:49:06.257 --- 10.0.0.1 ping statistics --- 00:49:06.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:06.257 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:49:06.257 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=79496 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 79496 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 79496 ']' 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:49:06.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
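With connectivity confirmed by the three pings above, nvmfappstart launches the target inside the namespace and waitforlisten blocks until its RPC socket answers. Reduced to the essentials (a sketch only; the suite's waitforlisten adds a timeout and PID liveness checks):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the target responds
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done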
00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:06.514 10:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:49:06.514 [2024-07-22 10:44:14.249384] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:49:06.514 [2024-07-22 10:44:14.249453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:06.514 [2024-07-22 10:44:14.369546] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:49:06.514 [2024-07-22 10:44:14.393881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:06.514 [2024-07-22 10:44:14.437454] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:06.515 [2024-07-22 10:44:14.437742] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:06.515 [2024-07-22 10:44:14.437836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:06.515 [2024-07-22 10:44:14.437882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:06.515 [2024-07-22 10:44:14.437907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:06.515 [2024-07-22 10:44:14.438122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:49:06.515 [2024-07-22 10:44:14.438329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:49:06.515 [2024-07-22 10:44:14.438849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:49:06.515 [2024-07-22 10:44:14.438849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.475 [2024-07-22 10:44:15.144447] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.475 [2024-07-22 10:44:15.167646] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == 
\n\v\m\e ]] 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:49:07.475 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:49:07.733 10:44:15 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:07.733 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 
--hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.991 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:49:08.249 10:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 
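The same pair of jq filters recurs throughout this run, so it is worth spelling out what the suite's get_referral_ips helper boils down to. Reconstructed from the trace (the actual function in target/referrals.sh may be shaped differently), it reads the referral list either straight from the target or through an initiator-side discovery, so the two views can be compared:

    get_referral_ips() {
        if [[ $1 == rpc ]]; then
            # target's own view of its referrals
            scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs
        else
            # initiator's view: discovery log minus the discovery subsystem being queried
            nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
                jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort | xargs
        fi
    }

Comparing the two outputs (e.g. "127.0.0.2 127.0.0.3 127.0.0.4" from both paths earlier in the trace) is the test's core assertion.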
00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:49:08.249 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:49:08.507 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:49:08.507 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:49:08.507 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:08.507 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:08.507 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:08.507 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:49:08.507 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:49:08.507 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:08.507 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:08.507 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:49:08.508 10:44:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:49:08.508 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:49:08.508 rmmod nvme_tcp 00:49:08.508 rmmod nvme_fabrics 00:49:08.766 rmmod nvme_keyring 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 79496 ']' 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 79496 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 79496 ']' 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 79496 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79496 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:49:08.766 killing process with pid 79496 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79496' 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 79496 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 79496 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:08.766 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:09.024 
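Editor's note: the teardown that starts here is the usual nvmftestfini sequence: drop the error trap, unload the NVMe/TCP kernel modules (which produces the rmmod lines above), then kill the target process recorded at start-up. A condensed, hedged sketch of that sequence; the pid variable name is illustrative:

    trap - SIGINT SIGTERM EXIT           # remove the cleanup trap installed when the test began
    modprobe -v -r nvme-tcp              # unloads nvme_tcp (and, via rmmod, nvme_fabrics / nvme_keyring)
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # stop the nvmf_tgt reactor process (pid 79496 in this run)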
10:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:49:09.024 00:49:09.024 real 0m3.228s 00:49:09.024 user 0m9.695s 00:49:09.024 sys 0m1.121s 00:49:09.024 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:49:09.024 10:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:49:09.024 ************************************ 00:49:09.024 END TEST nvmf_referrals 00:49:09.024 ************************************ 00:49:09.024 10:44:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:49:09.024 10:44:16 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:49:09.024 10:44:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:49:09.024 10:44:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:09.024 10:44:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:49:09.024 ************************************ 00:49:09.024 START TEST nvmf_connect_disconnect 00:49:09.024 ************************************ 00:49:09.024 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:49:09.283 * Looking for test storage... 00:49:09.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:09.283 10:44:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:49:09.283 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 
00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:49:09.284 Cannot find device "nvmf_tgt_br" 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:49:09.284 Cannot find device "nvmf_tgt_br2" 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:49:09.284 Cannot find device "nvmf_tgt_br" 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:49:09.284 Cannot find device "nvmf_tgt_br2" 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:09.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:09.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:09.284 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:49:09.542 10:44:17 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:49:09.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:09.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:49:09.542 00:49:09.542 --- 10.0.0.2 ping statistics --- 00:49:09.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:09.542 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:49:09.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:09.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:49:09.542 00:49:09.542 --- 10.0.0.3 ping statistics --- 00:49:09.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:09.542 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:09.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
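Editor's note: the block above is nvmf_veth_init building the virtual test topology: veth pairs, a network namespace for the target, and a bridge joining the host-side ends, finished with ping sanity checks. A simplified sketch of the same topology, with names and addresses copied from the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, and error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk                                   # the target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target address

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge the host-side ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                              # initiator -> target reachability check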
00:49:09.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:49:09.542 00:49:09.542 --- 10.0.0.1 ping statistics --- 00:49:09.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:09.542 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=79797 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 79797 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 79797 ']' 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:09.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:09.542 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:49:09.543 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:09.543 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:49:09.543 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:49:09.543 10:44:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:49:09.801 [2024-07-22 10:44:17.490217] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:49:09.801 [2024-07-22 10:44:17.490295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:09.801 [2024-07-22 10:44:17.609508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
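Editor's note: nvmfappstart, whose output begins here, launches the target application inside that namespace and blocks until its RPC socket answers. A hedged sketch of the same sequence; the polling loop is a simplified stand-in for the waitforlisten helper, and the rpc.py location is assumed to be the usual scripts/rpc.py in the repo:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!   # note: this is the 'ip netns exec' wrapper pid; the real helper tracks nvmf_tgt itself

    # Poll the RPC socket until the target is ready to accept commands.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done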
00:49:09.801 [2024-07-22 10:44:17.634563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:09.801 [2024-07-22 10:44:17.677821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:09.801 [2024-07-22 10:44:17.678274] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:09.801 [2024-07-22 10:44:17.678452] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:09.801 [2024-07-22 10:44:17.678673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:09.801 [2024-07-22 10:44:17.678885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:09.801 [2024-07-22 10:44:17.679189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:49:09.801 [2024-07-22 10:44:17.679366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:49:09.801 [2024-07-22 10:44:17.680084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:49:09.801 [2024-07-22 10:44:17.680086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:49:10.735 [2024-07-22 10:44:18.394671] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect 
-- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:10.735 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:10.736 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:49:10.736 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:10.736 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:10.736 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:10.736 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:49:10.736 [2024-07-22 10:44:18.466215] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:10.736 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:10.736 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:49:10.736 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:49:10.736 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:49:10.736 10:44:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:49:13.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:15.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:17.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:19.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:22.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:24.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:26.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:29.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:30.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:33.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:35.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:37.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:40.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:42.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:44.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:46.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:49.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:51.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:53.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:56.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:58.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:00.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:02.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:05.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:07.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:09.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:11.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:14.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
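Editor's note: the RPC calls above stand up the storage side (a TCP transport, a 64 MiB / 512 B-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420). The long run of "disconnected 1 controller(s)" messages surrounding this point comes from the 100-iteration connect/disconnect loop, whose body runs with xtrace off. A hedged reconstruction of one iteration, based on the NVME_CONNECT='nvme connect -i 8' setting above; the verification step between connect and disconnect is an assumption about what the script checks:

    for i in $(seq 1 100); do
        # Connect the initiator to the subsystem over the veth link, using 8 I/O queues.
        # NVME_HOSTNQN / NVME_HOSTID are set by nvmf/common.sh earlier in the trace.
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
             --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        # ...presumably verify that the controller and namespace appeared...
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits the "disconnected 1 controller(s)" lines above
    done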
00:50:15.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:18.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:20.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:22.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:25.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:27.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:29.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:31.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:34.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:36.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:38.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:41.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:43.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:45.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:47.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:50.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:52.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:54.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:56.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:59.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:01.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:03.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:05.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:08.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:10.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:12.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:14.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:17.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:19.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:21.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:23.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:26.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:28.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:30.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:33.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:35.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:37.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:39.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:42.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:44.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:46.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:49.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:51.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:53.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:55.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:58.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:51:59.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:02.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:04.371 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:52:06.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:09.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:11.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:13.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:15.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:18.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:20.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:22.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:25.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:27.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:29.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:31.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:34.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:36.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:38.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:41.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:43.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:45.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:47.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:50.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:52.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:54.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:57.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:52:57.059 rmmod nvme_tcp 00:52:57.059 rmmod nvme_fabrics 00:52:57.059 rmmod nvme_keyring 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 79797 ']' 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 79797 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 79797 ']' 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 79797 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:52:57.059 10:48:04 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79797 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:52:57.059 killing process with pid 79797 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79797' 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 79797 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 79797 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:52:57.059 00:52:57.059 real 3m48.043s 00:52:57.059 user 14m35.225s 00:52:57.059 sys 0m38.406s 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:52:57.059 10:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:52:57.059 ************************************ 00:52:57.059 END TEST nvmf_connect_disconnect 00:52:57.059 ************************************ 00:52:57.059 10:48:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:52:57.059 10:48:04 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:52:57.059 10:48:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:52:57.059 10:48:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:52:57.059 10:48:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:52:57.059 ************************************ 00:52:57.059 START TEST nvmf_multitarget 00:52:57.059 ************************************ 00:52:57.059 10:48:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:52:57.318 * Looking for test storage... 
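Editor's note: each suite in this log is driven through the run_test helper, which prints the starred START/END banners and the real/user/sys timing summary seen above before moving on to the next script. A rough sketch of that wrapper's shape, inferred from the output rather than taken from the actual autotest_common.sh implementation:

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # e.g. .../test/nvmf/target/multitarget.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }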
00:52:57.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:57.318 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:57.319 10:48:05 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:52:57.319 Cannot find device "nvmf_tgt_br" 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:52:57.319 Cannot find device "nvmf_tgt_br2" 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:52:57.319 Cannot find device "nvmf_tgt_br" 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:52:57.319 Cannot find device "nvmf_tgt_br2" 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:52:57.319 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:52:57.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:52:57.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:52:57.578 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:52:57.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:52:57.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:52:57.838 00:52:57.838 --- 10.0.0.2 ping statistics --- 00:52:57.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:57.838 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:52:57.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:52:57.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:52:57.838 00:52:57.838 --- 10.0.0.3 ping statistics --- 00:52:57.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:57.838 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:52:57.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:52:57.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:52:57.838 00:52:57.838 --- 10.0.0.1 ping statistics --- 00:52:57.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:57.838 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=83592 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 83592 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 83592 ']' 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:52:57.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
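The nvmf_veth_init sequence traced above builds the test fabric by hand before the target is launched inside the namespace. Condensed into a standalone sketch (interface names, the nvmf_tgt_ns_spdk namespace, and the 10.0.0.0/24 addresses are taken from the trace; error handling and the teardown of any previous run are omitted), the topology is roughly:
    # create the target namespace and three veth pairs (one initiator-side, two target-side)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace and address everything in 10.0.0.0/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring every link up, including loopback inside the namespace
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together and open TCP/4420 toward the initiator interface
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # reachability check in both directions, as in the pings logged here
    ping -c 1 10.0.0.2; ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1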
00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:52:57.838 10:48:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:52:57.838 [2024-07-22 10:48:05.710818] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:52:57.838 [2024-07-22 10:48:05.710881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:58.097 [2024-07-22 10:48:05.829918] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:52:58.097 [2024-07-22 10:48:05.854853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:52:58.097 [2024-07-22 10:48:05.896884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:58.097 [2024-07-22 10:48:05.896932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:58.097 [2024-07-22 10:48:05.896941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:58.097 [2024-07-22 10:48:05.896949] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:58.097 [2024-07-22 10:48:05.896956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:58.097 [2024-07-22 10:48:05.897164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:52:58.097 [2024-07-22 10:48:05.897343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:52:58.097 [2024-07-22 10:48:05.898142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:52:58.097 [2024-07-22 10:48:05.898143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:52:58.663 10:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:52:58.663 10:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:52:58.663 10:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:52:58.663 10:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:52:58.663 10:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:52:58.921 10:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:58.921 10:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:52:58.921 10:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:52:58.921 10:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:52:58.921 10:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:52:58.921 10:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:52:58.921 "nvmf_tgt_1" 00:52:58.921 10:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 
32 00:52:59.179 "nvmf_tgt_2" 00:52:59.179 10:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:52:59.179 10:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:52:59.179 10:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:52:59.179 10:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:52:59.437 true 00:52:59.437 10:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:52:59.437 true 00:52:59.437 10:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:52:59.437 10:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:52:59.437 10:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:52:59.437 10:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:52:59.437 10:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:52:59.437 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:52:59.437 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:52:59.696 rmmod nvme_tcp 00:52:59.696 rmmod nvme_fabrics 00:52:59.696 rmmod nvme_keyring 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 83592 ']' 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 83592 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 83592 ']' 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 83592 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83592 00:52:59.696 killing process with pid 83592 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83592' 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 83592 00:52:59.696 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 83592 00:52:59.955 10:48:07 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:52:59.955 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:52:59.955 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:52:59.955 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:52:59.955 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:52:59.955 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:59.955 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:59.955 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:59.955 10:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:52:59.955 ************************************ 00:52:59.955 END TEST nvmf_multitarget 00:52:59.955 ************************************ 00:52:59.955 00:52:59.955 real 0m2.784s 00:52:59.955 user 0m7.959s 00:52:59.955 sys 0m0.891s 00:52:59.955 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:52:59.955 10:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:52:59.955 10:48:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:52:59.955 10:48:07 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:52:59.955 10:48:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:52:59.955 10:48:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:52:59.955 10:48:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:52:59.955 ************************************ 00:52:59.955 START TEST nvmf_rpc 00:52:59.955 ************************************ 00:52:59.955 10:48:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:53:00.214 * Looking for test storage... 
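Stripped of the xtrace noise, the nvmf_multitarget run that finishes above exercises only a handful of RPCs against the running nvmf_tgt. A sketch of that flow, using the multitarget_rpc.py path from the trace (the $rpc shorthand is just for brevity here, not part of the test script):
    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_get_targets | jq length            # expect 1: only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc nvmf_get_targets | jq length            # expect 3 after the two creates
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    $rpc nvmf_get_targets | jq length            # back to 1 before nvmftestfini tears down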
00:53:00.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:53:00.214 10:48:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:00.214 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:53:00.214 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:53:00.214 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:53:00.214 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:53:00.214 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:53:00.214 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:53:00.214 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:53:00.215 Cannot find device "nvmf_tgt_br" 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:53:00.215 Cannot find device "nvmf_tgt_br2" 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:53:00.215 Cannot find device "nvmf_tgt_br" 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:53:00.215 Cannot find device "nvmf_tgt_br2" 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:53:00.215 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:00.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:00.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:53:00.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:00.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:53:00.474 00:53:00.474 --- 10.0.0.2 ping statistics --- 00:53:00.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:00.474 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:53:00.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:00.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:53:00.474 00:53:00.474 --- 10.0.0.3 ping statistics --- 00:53:00.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:00.474 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:53:00.474 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:00.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:53:00.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:53:00.734 00:53:00.734 --- 10.0.0.1 ping statistics --- 00:53:00.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:00.734 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=83821 00:53:00.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 83821 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 83821 ']' 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:00.734 10:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:53:00.734 [2024-07-22 10:48:08.495901] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:53:00.734 [2024-07-22 10:48:08.495982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:00.734 [2024-07-22 10:48:08.618451] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:53:00.734 [2024-07-22 10:48:08.642105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:53:00.993 [2024-07-22 10:48:08.685825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:00.994 [2024-07-22 10:48:08.685877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:53:00.994 [2024-07-22 10:48:08.685887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:00.994 [2024-07-22 10:48:08.685895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:00.994 [2024-07-22 10:48:08.685902] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:00.994 [2024-07-22 10:48:08.686105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:53:00.994 [2024-07-22 10:48:08.686340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:53:00.994 [2024-07-22 10:48:08.687024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:53:00.994 [2024-07-22 10:48:08.687025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:53:01.562 "poll_groups": [ 00:53:01.562 { 00:53:01.562 "admin_qpairs": 0, 00:53:01.562 "completed_nvme_io": 0, 00:53:01.562 "current_admin_qpairs": 0, 00:53:01.562 "current_io_qpairs": 0, 00:53:01.562 "io_qpairs": 0, 00:53:01.562 "name": "nvmf_tgt_poll_group_000", 00:53:01.562 "pending_bdev_io": 0, 00:53:01.562 "transports": [] 00:53:01.562 }, 00:53:01.562 { 00:53:01.562 "admin_qpairs": 0, 00:53:01.562 "completed_nvme_io": 0, 00:53:01.562 "current_admin_qpairs": 0, 00:53:01.562 "current_io_qpairs": 0, 00:53:01.562 "io_qpairs": 0, 00:53:01.562 "name": "nvmf_tgt_poll_group_001", 00:53:01.562 "pending_bdev_io": 0, 00:53:01.562 "transports": [] 00:53:01.562 }, 00:53:01.562 { 00:53:01.562 "admin_qpairs": 0, 00:53:01.562 "completed_nvme_io": 0, 00:53:01.562 "current_admin_qpairs": 0, 00:53:01.562 "current_io_qpairs": 0, 00:53:01.562 "io_qpairs": 0, 00:53:01.562 "name": "nvmf_tgt_poll_group_002", 00:53:01.562 "pending_bdev_io": 0, 00:53:01.562 "transports": [] 00:53:01.562 }, 00:53:01.562 { 00:53:01.562 "admin_qpairs": 0, 00:53:01.562 "completed_nvme_io": 0, 00:53:01.562 "current_admin_qpairs": 0, 00:53:01.562 "current_io_qpairs": 0, 00:53:01.562 "io_qpairs": 0, 00:53:01.562 "name": "nvmf_tgt_poll_group_003", 00:53:01.562 "pending_bdev_io": 0, 00:53:01.562 "transports": [] 00:53:01.562 } 00:53:01.562 ], 00:53:01.562 "tick_rate": 2490000000 00:53:01.562 }' 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:01.562 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.562 [2024-07-22 10:48:09.493199] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:53:01.822 "poll_groups": [ 00:53:01.822 { 00:53:01.822 "admin_qpairs": 0, 00:53:01.822 "completed_nvme_io": 0, 00:53:01.822 "current_admin_qpairs": 0, 00:53:01.822 "current_io_qpairs": 0, 00:53:01.822 "io_qpairs": 0, 00:53:01.822 "name": "nvmf_tgt_poll_group_000", 00:53:01.822 "pending_bdev_io": 0, 00:53:01.822 "transports": [ 00:53:01.822 { 00:53:01.822 "trtype": "TCP" 00:53:01.822 } 00:53:01.822 ] 00:53:01.822 }, 00:53:01.822 { 00:53:01.822 "admin_qpairs": 0, 00:53:01.822 "completed_nvme_io": 0, 00:53:01.822 "current_admin_qpairs": 0, 00:53:01.822 "current_io_qpairs": 0, 00:53:01.822 "io_qpairs": 0, 00:53:01.822 "name": "nvmf_tgt_poll_group_001", 00:53:01.822 "pending_bdev_io": 0, 00:53:01.822 "transports": [ 00:53:01.822 { 00:53:01.822 "trtype": "TCP" 00:53:01.822 } 00:53:01.822 ] 00:53:01.822 }, 00:53:01.822 { 00:53:01.822 "admin_qpairs": 0, 00:53:01.822 "completed_nvme_io": 0, 00:53:01.822 "current_admin_qpairs": 0, 00:53:01.822 "current_io_qpairs": 0, 00:53:01.822 "io_qpairs": 0, 00:53:01.822 "name": "nvmf_tgt_poll_group_002", 00:53:01.822 "pending_bdev_io": 0, 00:53:01.822 "transports": [ 00:53:01.822 { 00:53:01.822 "trtype": "TCP" 00:53:01.822 } 00:53:01.822 ] 00:53:01.822 }, 00:53:01.822 { 00:53:01.822 "admin_qpairs": 0, 00:53:01.822 "completed_nvme_io": 0, 00:53:01.822 "current_admin_qpairs": 0, 00:53:01.822 "current_io_qpairs": 0, 00:53:01.822 "io_qpairs": 0, 00:53:01.822 "name": "nvmf_tgt_poll_group_003", 00:53:01.822 "pending_bdev_io": 0, 00:53:01.822 "transports": [ 00:53:01.822 { 00:53:01.822 "trtype": "TCP" 00:53:01.822 } 00:53:01.822 ] 00:53:01.822 } 00:53:01.822 ], 00:53:01.822 "tick_rate": 2490000000 00:53:01.822 }' 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
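The jcount/jsum helpers being traced here and just below reduce the nvmf_get_stats JSON to simple scalars with jq and awk. A minimal sketch, assuming stats.json holds the poll-group output printed above (the file name is only for illustration; rpc.sh pipes the RPC output directly):
    jq '.poll_groups[].name' stats.json | wc -l                                  # jcount: 4 poll groups, one per core in -m 0xF
    jq '.poll_groups[].admin_qpairs' stats.json | awk '{s+=$1} END {print s}'    # jsum: 0 admin qpairs before any host connects
    jq '.poll_groups[].io_qpairs'    stats.json | awk '{s+=$1} END {print s}'    # jsum: 0 io qpairs at this point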
00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.822 Malloc1 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.822 [2024-07-22 10:48:09.681111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:01.822 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -a 10.0.0.2 -s 4420 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -a 10.0.0.2 -s 4420 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -a 10.0.0.2 -s 4420 00:53:01.823 [2024-07-22 10:48:09.719374] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7' 00:53:01.823 Failed to write to /dev/nvme-fabrics: Input/output error 00:53:01.823 could not add new controller: failed to write to nvme-fabrics device 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:01.823 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:53:02.082 10:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:53:02.082 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:53:02.082 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:53:02.082 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:53:02.082 10:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:53:03.989 10:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:53:04.248 10:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:53:04.248 10:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:53:04.248 10:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:53:04.248 10:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:53:04.248 10:48:11 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:53:04.248 10:48:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:53:04.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:53:04.248 10:48:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:53:04.248 10:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:53:04.248 10:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:04.248 10:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:53:04.248 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:53:04.249 [2024-07-22 10:48:12.056602] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7' 00:53:04.249 Failed to write to /dev/nvme-fabrics: Input/output error 00:53:04.249 could not add new controller: failed to write to nvme-fabrics device 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:04.249 10:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:53:04.507 10:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:53:04.507 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:53:04.507 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:53:04.507 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:53:04.507 10:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:53:06.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:06.409 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:53:06.668 10:48:14 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:06.668 [2024-07-22 10:48:14.388500] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:53:06.668 10:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:53:09.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:09.197 [2024-07-22 10:48:16.712505] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:53:09.197 10:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:53:11.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:53:11.139 10:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:11.140 [2024-07-22 10:48:19.028773] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:11.140 10:48:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:53:11.398 10:48:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:53:11.398 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:53:11.398 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:53:11.398 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:53:11.398 10:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:53:13.302 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:53:13.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
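
The waitforserial helper traced above (common/autotest_common.sh, around lines 1198-1208) simply polls lsblk until a block device carrying the expected serial shows up. A condensed standalone sketch of that polling pattern, using the same 2-second sleep, 15-attempt limit, and lsblk/grep check seen in the trace (the real helper also takes a device count; this version assumes a single device):

  waitforserial() {
      local serial=$1 i=0
      sleep 2                                   # give the newly connected controller time to surface a namespace
      while (( i++ <= 15 )); do
          # count block devices whose SERIAL column matches the expected string
          if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
              return 0
          fi
          sleep 2
      done
      return 1
  }
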
0 == 0 ]] 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:13.572 [2024-07-22 10:48:21.376650] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:13.572 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:53:13.831 10:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:53:13.831 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:53:13.831 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:53:13.831 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:53:13.831 10:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:53:15.735 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:53:15.735 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:53:15.735 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:53:15.735 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:53:15.735 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:53:15.735 
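
Each iteration of the loop at target/rpc.sh lines 81-94, traced above and below, runs the same create/connect/tear-down cycle. Condensed into plain commands, this is a sketch only: it assumes a running nvmf_tgt that already exposes the Malloc1 bdev, rpc.py on its default socket, and NVME_HOSTNQN/NVME_HOSTID standing in for the generated host NQN and ID visible in the nvme connect lines of the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5              # attach bdev Malloc1 as namespace 5
  $rpc nvmf_subsystem_allow_any_host "$NQN"
  nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  waitforserial SPDKISFASTANDAWESOME                          # poll lsblk as sketched above
  nvme disconnect -n "$NQN"
  $rpc nvmf_subsystem_remove_ns "$NQN" 5
  $rpc nvmf_delete_subsystem "$NQN"

The loop repeats this cycle several times against the same NQN, which is why the listener NOTICE and the "disconnected 1 controller(s)" lines recur throughout this part of the log.
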
10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:53:15.735 10:48:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:53:15.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:15.993 [2024-07-22 10:48:23.808210] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:15.993 10:48:23 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:15.993 10:48:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:53:16.251 10:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:53:16.251 10:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:53:16.251 10:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:53:16.251 10:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:53:16.251 10:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:53:18.153 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:53:18.153 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:53:18.153 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:53:18.153 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:53:18.153 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:53:18.153 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:53:18.153 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:53:18.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:53:18.153 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:53:18.153 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:53:18.153 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 [2024-07-22 10:48:26.163515] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 [2024-07-22 10:48:26.223463] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.412 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.413 [2024-07-22 10:48:26.279453] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
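
The second loop (target/rpc.sh lines 99-107, traced through here) is a lighter variant repeated five times: it never connects a host, it only creates the subsystem, attaches Malloc1 without an explicit namespace ID, then removes namespace 1 and deletes the subsystem again. A condensed sketch under the same assumptions as above (with no -n given, the target assigns the first free NSID, which is 1 here and is what the remove call uses):

  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns "$NQN" Malloc1      # no -n: target picks NSID 1
      $rpc nvmf_subsystem_allow_any_host "$NQN"
      $rpc nvmf_subsystem_remove_ns "$NQN" 1
      $rpc nvmf_delete_subsystem "$NQN"
  done
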
00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.413 [2024-07-22 10:48:26.335445] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.413 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.671 [2024-07-22 10:48:26.395452] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:18.671 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:53:18.672 "poll_groups": [ 00:53:18.672 { 00:53:18.672 "admin_qpairs": 2, 00:53:18.672 "completed_nvme_io": 67, 00:53:18.672 "current_admin_qpairs": 0, 00:53:18.672 "current_io_qpairs": 0, 00:53:18.672 "io_qpairs": 16, 00:53:18.672 "name": "nvmf_tgt_poll_group_000", 00:53:18.672 "pending_bdev_io": 0, 00:53:18.672 "transports": [ 00:53:18.672 { 00:53:18.672 "trtype": "TCP" 00:53:18.672 } 00:53:18.672 ] 00:53:18.672 }, 00:53:18.672 { 00:53:18.672 "admin_qpairs": 3, 00:53:18.672 "completed_nvme_io": 117, 00:53:18.672 "current_admin_qpairs": 0, 00:53:18.672 "current_io_qpairs": 0, 00:53:18.672 "io_qpairs": 17, 00:53:18.672 "name": "nvmf_tgt_poll_group_001", 00:53:18.672 "pending_bdev_io": 0, 00:53:18.672 "transports": [ 00:53:18.672 { 00:53:18.672 "trtype": "TCP" 00:53:18.672 } 00:53:18.672 ] 00:53:18.672 }, 00:53:18.672 { 00:53:18.672 "admin_qpairs": 1, 00:53:18.672 
"completed_nvme_io": 120, 00:53:18.672 "current_admin_qpairs": 0, 00:53:18.672 "current_io_qpairs": 0, 00:53:18.672 "io_qpairs": 19, 00:53:18.672 "name": "nvmf_tgt_poll_group_002", 00:53:18.672 "pending_bdev_io": 0, 00:53:18.672 "transports": [ 00:53:18.672 { 00:53:18.672 "trtype": "TCP" 00:53:18.672 } 00:53:18.672 ] 00:53:18.672 }, 00:53:18.672 { 00:53:18.672 "admin_qpairs": 1, 00:53:18.672 "completed_nvme_io": 116, 00:53:18.672 "current_admin_qpairs": 0, 00:53:18.672 "current_io_qpairs": 0, 00:53:18.672 "io_qpairs": 18, 00:53:18.672 "name": "nvmf_tgt_poll_group_003", 00:53:18.672 "pending_bdev_io": 0, 00:53:18.672 "transports": [ 00:53:18.672 { 00:53:18.672 "trtype": "TCP" 00:53:18.672 } 00:53:18.672 ] 00:53:18.672 } 00:53:18.672 ], 00:53:18.672 "tick_rate": 2490000000 00:53:18.672 }' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:53:18.672 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:53:18.672 rmmod nvme_tcp 00:53:18.930 rmmod nvme_fabrics 00:53:18.930 rmmod nvme_keyring 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 83821 ']' 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 83821 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 83821 ']' 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 83821 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83821 00:53:18.930 killing process with pid 83821 00:53:18.930 10:48:26 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83821' 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 83821 00:53:18.930 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 83821 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:53:19.188 00:53:19.188 real 0m19.099s 00:53:19.188 user 1m10.925s 00:53:19.188 sys 0m3.425s 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:53:19.188 ************************************ 00:53:19.188 END TEST nvmf_rpc 00:53:19.188 ************************************ 00:53:19.188 10:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:19.188 10:48:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:53:19.188 10:48:26 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:53:19.188 10:48:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:53:19.188 10:48:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:53:19.188 10:48:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:19.188 ************************************ 00:53:19.188 START TEST nvmf_invalid 00:53:19.188 ************************************ 00:53:19.188 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:53:19.188 * Looking for test storage... 
00:53:19.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:19.447 
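
nvmf/common.sh, sourced above, generates a host NQN once per run with nvme gen-hostnqn and reuses its UUID as the host ID; that is the 5d34c6e8-... value appearing in every nvme connect line of this log. A sketch of that setup follows; the NVME_HOST array matches the trace, while the suffix-stripping used to derive the host ID is an assumption (the trace only shows the already-expanded value):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: host ID taken as the UUID suffix of the NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
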
10:48:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:19.447 10:48:27 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:19.447 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:53:19.448 Cannot find device "nvmf_tgt_br" 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:53:19.448 Cannot find device "nvmf_tgt_br2" 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:53:19.448 Cannot find device "nvmf_tgt_br" 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:53:19.448 Cannot find device "nvmf_tgt_br2" 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:19.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:19.448 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:19.448 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:53:19.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:19.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:53:19.707 00:53:19.707 --- 10.0.0.2 ping statistics --- 00:53:19.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:19.707 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:53:19.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:53:19.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:53:19.707 00:53:19.707 --- 10.0.0.3 ping statistics --- 00:53:19.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:19.707 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:19.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:19.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:53:19.707 00:53:19.707 --- 10.0.0.1 ping statistics --- 00:53:19.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:19.707 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:53:19.707 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:53:19.966 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=84340 00:53:19.966 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:53:19.966 10:48:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 84340 00:53:19.966 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 84340 ']' 00:53:19.966 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:19.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:19.966 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:53:19.966 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:19.966 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:53:19.966 10:48:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:53:19.966 [2024-07-22 10:48:27.694763] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
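
The nvmf_veth_init sequence traced above builds the test topology from scratch: a network namespace for the target, veth pairs, a bridge, an iptables accept rule for port 4420, and then the three pings to verify connectivity. Reduced to its essential commands, this sketch keeps the interface and namespace names from the trace and omits the second target interface (10.0.0.3), which is set up the same way:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace come from the teardown pass that runs before this setup, so they are expected on a clean host.
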
00:53:19.966 [2024-07-22 10:48:27.694835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:19.966 [2024-07-22 10:48:27.814362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:53:19.966 [2024-07-22 10:48:27.836813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:53:19.966 [2024-07-22 10:48:27.879560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:19.966 [2024-07-22 10:48:27.879853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:19.966 [2024-07-22 10:48:27.879947] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:19.966 [2024-07-22 10:48:27.879992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:19.966 [2024-07-22 10:48:27.880017] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:19.966 [2024-07-22 10:48:27.880219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:53:19.966 [2024-07-22 10:48:27.880404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:53:19.966 [2024-07-22 10:48:27.881330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:53:19.966 [2024-07-22 10:48:27.881330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9581 00:53:20.904 [2024-07-22 10:48:28.775694] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/22 10:48:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9581 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:53:20.904 request: 00:53:20.904 { 00:53:20.904 "method": "nvmf_create_subsystem", 00:53:20.904 "params": { 00:53:20.904 "nqn": "nqn.2016-06.io.spdk:cnode9581", 00:53:20.904 "tgt_name": "foobar" 00:53:20.904 } 00:53:20.904 } 00:53:20.904 Got JSON-RPC error response 00:53:20.904 GoRPCClient: error on JSON-RPC call' 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/22 10:48:28 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode9581 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:53:20.904 request: 00:53:20.904 { 00:53:20.904 "method": "nvmf_create_subsystem", 00:53:20.904 "params": { 00:53:20.904 "nqn": "nqn.2016-06.io.spdk:cnode9581", 00:53:20.904 "tgt_name": "foobar" 00:53:20.904 } 00:53:20.904 } 00:53:20.904 Got JSON-RPC error response 00:53:20.904 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:53:20.904 10:48:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2135 00:53:21.166 [2024-07-22 10:48:28.979559] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2135: invalid serial number 'SPDKISFASTANDAWESOME' 00:53:21.166 10:48:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/22 10:48:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2135 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:53:21.166 request: 00:53:21.166 { 00:53:21.166 "method": "nvmf_create_subsystem", 00:53:21.166 "params": { 00:53:21.166 "nqn": "nqn.2016-06.io.spdk:cnode2135", 00:53:21.166 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:53:21.166 } 00:53:21.166 } 00:53:21.166 Got JSON-RPC error response 00:53:21.166 GoRPCClient: error on JSON-RPC call' 00:53:21.166 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/22 10:48:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2135 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:53:21.166 request: 00:53:21.166 { 00:53:21.166 "method": "nvmf_create_subsystem", 00:53:21.166 "params": { 00:53:21.166 "nqn": "nqn.2016-06.io.spdk:cnode2135", 00:53:21.166 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:53:21.166 } 00:53:21.166 } 00:53:21.166 Got JSON-RPC error response 00:53:21.166 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:53:21.166 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:53:21.166 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20741 00:53:21.425 [2024-07-22 10:48:29.179405] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20741: invalid model number 'SPDK_Controller' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/22 10:48:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode20741], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:53:21.425 request: 00:53:21.425 { 00:53:21.425 "method": "nvmf_create_subsystem", 00:53:21.425 "params": { 00:53:21.425 "nqn": "nqn.2016-06.io.spdk:cnode20741", 00:53:21.425 "model_number": "SPDK_Controller\u001f" 00:53:21.425 } 00:53:21.425 } 00:53:21.425 Got JSON-RPC error response 00:53:21.425 GoRPCClient: error on JSON-RPC call' 00:53:21.425 10:48:29 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/22 10:48:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode20741], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:53:21.425 request: 00:53:21.425 { 00:53:21.425 "method": "nvmf_create_subsystem", 00:53:21.425 "params": { 00:53:21.425 "nqn": "nqn.2016-06.io.spdk:cnode20741", 00:53:21.425 "model_number": "SPDK_Controller\u001f" 00:53:21.425 } 00:53:21.425 } 00:53:21.425 Got JSON-RPC error response 00:53:21.425 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.425 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ . 
== \- ]] 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '.jc9s6i1rlxr#z5pUIU]:' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '.jc9s6i1rlxr#z5pUIU]:' nqn.2016-06.io.spdk:cnode2982 00:53:21.685 [2024-07-22 10:48:29.559360] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2982: invalid serial number '.jc9s6i1rlxr#z5pUIU]:' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/22 10:48:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2982 serial_number:.jc9s6i1rlxr#z5pUIU]:], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN .jc9s6i1rlxr#z5pUIU]: 00:53:21.685 request: 00:53:21.685 { 00:53:21.685 "method": "nvmf_create_subsystem", 00:53:21.685 "params": { 00:53:21.685 "nqn": "nqn.2016-06.io.spdk:cnode2982", 00:53:21.685 "serial_number": ".jc9s6i1rlxr#z5pUIU]:" 00:53:21.685 } 00:53:21.685 } 00:53:21.685 Got JSON-RPC error response 00:53:21.685 GoRPCClient: error on JSON-RPC call' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/22 10:48:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2982 serial_number:.jc9s6i1rlxr#z5pUIU]:], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN .jc9s6i1rlxr#z5pUIU]: 00:53:21.685 request: 00:53:21.685 { 00:53:21.685 "method": "nvmf_create_subsystem", 00:53:21.685 "params": { 00:53:21.685 "nqn": "nqn.2016-06.io.spdk:cnode2982", 00:53:21.685 "serial_number": ".jc9s6i1rlxr#z5pUIU]:" 00:53:21.685 } 00:53:21.685 } 00:53:21.685 Got JSON-RPC error response 00:53:21.685 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:53:21.685 10:48:29 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.685 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:53:21.945 10:48:29 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:53:21.945 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:53:21.946 10:48:29 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.946 10:48:29 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:53:21.946 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:53:21.947 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:53:21.947 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.947 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:21.947 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:53:21.947 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:53:21.947 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:53:21.947 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:21.947 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '[!sq9N/^doUzlAEQ@M"a^yRejbI|x5iMga/L=>>#g' 00:53:22.206 10:48:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '[!sq9N/^doUzlAEQ@M"a^yRejbI|x5iMga/L=>>#g' nqn.2016-06.io.spdk:cnode2332 00:53:22.206 [2024-07-22 10:48:30.103367] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode2332: invalid model number '[!sq9N/^doUzlAEQ@M"a^yRejbI|x5iMga/L=>>#g' 00:53:22.206 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/22 10:48:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:[!sq9N/^doUzlAEQ@M"a^yRejbI|x5iMga/L=>>#g nqn:nqn.2016-06.io.spdk:cnode2332], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN [!sq9N/^doUzlAEQ@M"a^yRejbI|x5iMga/L=>>#g 00:53:22.206 request: 00:53:22.206 { 00:53:22.206 "method": "nvmf_create_subsystem", 00:53:22.206 "params": { 00:53:22.206 "nqn": "nqn.2016-06.io.spdk:cnode2332", 00:53:22.206 "model_number": "[!sq9N/^doUzlAEQ@M\"a^yRejbI|x5iMga/L=>>#g" 00:53:22.206 } 00:53:22.206 } 00:53:22.206 Got JSON-RPC error response 00:53:22.206 GoRPCClient: error on JSON-RPC call' 00:53:22.206 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/22 10:48:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:[!sq9N/^doUzlAEQ@M"a^yRejbI|x5iMga/L=>>#g nqn:nqn.2016-06.io.spdk:cnode2332], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN [!sq9N/^doUzlAEQ@M"a^yRejbI|x5iMga/L=>>#g 00:53:22.206 request: 00:53:22.206 { 00:53:22.206 "method": "nvmf_create_subsystem", 00:53:22.206 "params": { 00:53:22.206 "nqn": "nqn.2016-06.io.spdk:cnode2332", 00:53:22.206 "model_number": "[!sq9N/^doUzlAEQ@M\"a^yRejbI|x5iMga/L=>>#g" 00:53:22.206 } 00:53:22.206 } 00:53:22.206 Got JSON-RPC error response 00:53:22.206 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:53:22.206 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:53:22.465 [2024-07-22 10:48:30.303302] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:22.465 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:53:22.724 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:53:22.725 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:53:22.725 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:53:22.725 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:53:22.725 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:53:22.984 [2024-07-22 10:48:30.723391] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:53:22.984 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/22 10:48:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:53:22.984 request: 00:53:22.984 { 00:53:22.984 "method": "nvmf_subsystem_remove_listener", 00:53:22.984 "params": { 00:53:22.984 "nqn": "nqn.2016-06.io.spdk:cnode", 00:53:22.984 "listen_address": { 00:53:22.984 "trtype": "tcp", 00:53:22.984 "traddr": "", 00:53:22.984 "trsvcid": "4421" 00:53:22.984 } 00:53:22.984 } 00:53:22.984 } 00:53:22.984 Got JSON-RPC error response 00:53:22.984 GoRPCClient: error on JSON-RPC call' 00:53:22.984 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- 
# [[ 2024/07/22 10:48:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:53:22.984 request: 00:53:22.984 { 00:53:22.984 "method": "nvmf_subsystem_remove_listener", 00:53:22.984 "params": { 00:53:22.984 "nqn": "nqn.2016-06.io.spdk:cnode", 00:53:22.984 "listen_address": { 00:53:22.984 "trtype": "tcp", 00:53:22.984 "traddr": "", 00:53:22.984 "trsvcid": "4421" 00:53:22.984 } 00:53:22.984 } 00:53:22.984 } 00:53:22.984 Got JSON-RPC error response 00:53:22.984 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:53:22.984 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4446 -i 0 00:53:23.243 [2024-07-22 10:48:30.927346] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4446: invalid cntlid range [0-65519] 00:53:23.243 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/22 10:48:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode4446], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:53:23.243 request: 00:53:23.243 { 00:53:23.243 "method": "nvmf_create_subsystem", 00:53:23.243 "params": { 00:53:23.243 "nqn": "nqn.2016-06.io.spdk:cnode4446", 00:53:23.243 "min_cntlid": 0 00:53:23.243 } 00:53:23.243 } 00:53:23.243 Got JSON-RPC error response 00:53:23.243 GoRPCClient: error on JSON-RPC call' 00:53:23.244 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/22 10:48:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode4446], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:53:23.244 request: 00:53:23.244 { 00:53:23.244 "method": "nvmf_create_subsystem", 00:53:23.244 "params": { 00:53:23.244 "nqn": "nqn.2016-06.io.spdk:cnode4446", 00:53:23.244 "min_cntlid": 0 00:53:23.244 } 00:53:23.244 } 00:53:23.244 Got JSON-RPC error response 00:53:23.244 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:53:23.244 10:48:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16970 -i 65520 00:53:23.244 [2024-07-22 10:48:31.127335] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16970: invalid cntlid range [65520-65519] 00:53:23.244 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/22 10:48:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16970], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:53:23.244 request: 00:53:23.244 { 00:53:23.244 "method": "nvmf_create_subsystem", 00:53:23.244 "params": { 00:53:23.244 "nqn": "nqn.2016-06.io.spdk:cnode16970", 00:53:23.244 "min_cntlid": 65520 00:53:23.244 } 00:53:23.244 } 00:53:23.244 Got JSON-RPC error response 00:53:23.244 GoRPCClient: error on JSON-RPC call' 00:53:23.244 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/22 10:48:31 error on JSON-RPC call, method: 
nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16970], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:53:23.244 request: 00:53:23.244 { 00:53:23.244 "method": "nvmf_create_subsystem", 00:53:23.244 "params": { 00:53:23.244 "nqn": "nqn.2016-06.io.spdk:cnode16970", 00:53:23.244 "min_cntlid": 65520 00:53:23.244 } 00:53:23.244 } 00:53:23.244 Got JSON-RPC error response 00:53:23.244 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:53:23.244 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32610 -I 0 00:53:23.502 [2024-07-22 10:48:31.327343] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32610: invalid cntlid range [1-0] 00:53:23.502 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/22 10:48:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode32610], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:53:23.502 request: 00:53:23.502 { 00:53:23.502 "method": "nvmf_create_subsystem", 00:53:23.502 "params": { 00:53:23.502 "nqn": "nqn.2016-06.io.spdk:cnode32610", 00:53:23.502 "max_cntlid": 0 00:53:23.502 } 00:53:23.502 } 00:53:23.502 Got JSON-RPC error response 00:53:23.502 GoRPCClient: error on JSON-RPC call' 00:53:23.502 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/22 10:48:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode32610], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:53:23.502 request: 00:53:23.502 { 00:53:23.502 "method": "nvmf_create_subsystem", 00:53:23.502 "params": { 00:53:23.502 "nqn": "nqn.2016-06.io.spdk:cnode32610", 00:53:23.502 "max_cntlid": 0 00:53:23.502 } 00:53:23.502 } 00:53:23.502 Got JSON-RPC error response 00:53:23.502 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:53:23.502 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10553 -I 65520 00:53:23.761 [2024-07-22 10:48:31.539324] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10553: invalid cntlid range [1-65520] 00:53:23.761 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/22 10:48:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode10553], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:53:23.761 request: 00:53:23.761 { 00:53:23.761 "method": "nvmf_create_subsystem", 00:53:23.761 "params": { 00:53:23.761 "nqn": "nqn.2016-06.io.spdk:cnode10553", 00:53:23.761 "max_cntlid": 65520 00:53:23.761 } 00:53:23.761 } 00:53:23.761 Got JSON-RPC error response 00:53:23.761 GoRPCClient: error on JSON-RPC call' 00:53:23.761 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/22 10:48:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode10553], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:53:23.761 request: 
00:53:23.761 { 00:53:23.761 "method": "nvmf_create_subsystem", 00:53:23.761 "params": { 00:53:23.761 "nqn": "nqn.2016-06.io.spdk:cnode10553", 00:53:23.761 "max_cntlid": 65520 00:53:23.761 } 00:53:23.761 } 00:53:23.761 Got JSON-RPC error response 00:53:23.761 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:53:23.761 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12021 -i 6 -I 5 00:53:24.019 [2024-07-22 10:48:31.739144] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12021: invalid cntlid range [6-5] 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/22 10:48:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode12021], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:53:24.019 request: 00:53:24.019 { 00:53:24.019 "method": "nvmf_create_subsystem", 00:53:24.019 "params": { 00:53:24.019 "nqn": "nqn.2016-06.io.spdk:cnode12021", 00:53:24.019 "min_cntlid": 6, 00:53:24.019 "max_cntlid": 5 00:53:24.019 } 00:53:24.019 } 00:53:24.019 Got JSON-RPC error response 00:53:24.019 GoRPCClient: error on JSON-RPC call' 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/22 10:48:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode12021], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:53:24.019 request: 00:53:24.019 { 00:53:24.019 "method": "nvmf_create_subsystem", 00:53:24.019 "params": { 00:53:24.019 "nqn": "nqn.2016-06.io.spdk:cnode12021", 00:53:24.019 "min_cntlid": 6, 00:53:24.019 "max_cntlid": 5 00:53:24.019 } 00:53:24.019 } 00:53:24.019 Got JSON-RPC error response 00:53:24.019 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:53:24.019 { 00:53:24.019 "name": "foobar", 00:53:24.019 "method": "nvmf_delete_target", 00:53:24.019 "req_id": 1 00:53:24.019 } 00:53:24.019 Got JSON-RPC error response 00:53:24.019 response: 00:53:24.019 { 00:53:24.019 "code": -32602, 00:53:24.019 "message": "The specified target doesn'\''t exist, cannot delete it." 00:53:24.019 }' 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:53:24.019 { 00:53:24.019 "name": "foobar", 00:53:24.019 "method": "nvmf_delete_target", 00:53:24.019 "req_id": 1 00:53:24.019 } 00:53:24.019 Got JSON-RPC error response 00:53:24.019 response: 00:53:24.019 { 00:53:24.019 "code": -32602, 00:53:24.019 "message": "The specified target doesn't exist, cannot delete it." 
00:53:24.019 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:53:24.019 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:53:24.019 rmmod nvme_tcp 00:53:24.019 rmmod nvme_fabrics 00:53:24.277 rmmod nvme_keyring 00:53:24.277 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:53:24.277 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:53:24.277 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:53:24.277 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 84340 ']' 00:53:24.277 10:48:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 84340 00:53:24.277 10:48:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 84340 ']' 00:53:24.277 10:48:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 84340 00:53:24.277 10:48:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:53:24.277 10:48:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:53:24.277 10:48:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84340 00:53:24.277 10:48:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:53:24.277 10:48:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:53:24.277 killing process with pid 84340 00:53:24.277 10:48:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84340' 00:53:24.277 10:48:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 84340 00:53:24.277 10:48:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 84340 00:53:24.277 10:48:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:53:24.277 10:48:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:53:24.277 10:48:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:53:24.277 10:48:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:53:24.277 10:48:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:53:24.278 10:48:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:24.278 10:48:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:53:24.278 10:48:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:24.536 10:48:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:53:24.536 00:53:24.536 real 0m5.242s 00:53:24.536 user 0m19.594s 00:53:24.536 sys 0m1.606s 00:53:24.536 10:48:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:53:24.536 10:48:32 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:53:24.536 ************************************ 00:53:24.536 END TEST nvmf_invalid 00:53:24.536 ************************************ 00:53:24.536 10:48:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:53:24.536 10:48:32 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:53:24.536 10:48:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:53:24.536 10:48:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:53:24.536 10:48:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:24.536 ************************************ 00:53:24.536 START TEST nvmf_abort 00:53:24.536 ************************************ 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:53:24.536 * Looking for test storage... 00:53:24.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:24.536 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:24.795 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:53:24.796 10:48:32 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:53:24.796 Cannot find device "nvmf_tgt_br" 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:53:24.796 Cannot find device "nvmf_tgt_br2" 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:53:24.796 Cannot find device "nvmf_tgt_br" 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:53:24.796 Cannot find device "nvmf_tgt_br2" 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:53:24.796 10:48:32 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:24.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:24.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:24.796 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:53:25.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:53:25.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:53:25.055 00:53:25.055 --- 10.0.0.2 ping statistics --- 00:53:25.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:25.055 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:53:25.055 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:25.055 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:53:25.055 00:53:25.055 --- 10.0.0.3 ping statistics --- 00:53:25.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:25.055 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:25.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:25.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:53:25.055 00:53:25.055 --- 10.0.0.1 ping statistics --- 00:53:25.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:25.055 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=84840 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 84840 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 84840 ']' 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:53:25.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
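The nvmf_veth_init sequence traced above gives the test a private network: one veth pair for the initiator (nvmf_init_if at 10.0.0.1/24 on the host), two veth pairs whose far ends (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the host-side peers tied together by the nvmf_br bridge, and an iptables rule opening TCP port 4420 on the initiator interface. A minimal stand-alone sketch of the same topology, reusing only the names, addresses, and commands visible in this trace (run as root; the shebang and set flags are added for convenience and are not part of the harness):

#!/usr/bin/env bash
# Sketch of the veth/bridge topology that nvmf_veth_init assembles in this run.
set -euxo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if end carries an IP address, the *_br end is its bridge-side peer.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side endpoints live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # host -> target namespace, through the bridge

The ping results that follow in the trace are exactly this reachability check for 10.0.0.2, 10.0.0.3, and (from inside the namespace) 10.0.0.1.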
00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:53:25.055 10:48:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:25.314 [2024-07-22 10:48:33.030071] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:53:25.314 [2024-07-22 10:48:33.030140] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:25.314 [2024-07-22 10:48:33.148831] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:53:25.314 [2024-07-22 10:48:33.168425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:25.314 [2024-07-22 10:48:33.209311] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:25.314 [2024-07-22 10:48:33.209578] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:25.314 [2024-07-22 10:48:33.209664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:25.314 [2024-07-22 10:48:33.209709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:25.314 [2024-07-22 10:48:33.209734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:25.314 [2024-07-22 10:48:33.209951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:53:25.314 [2024-07-22 10:48:33.210650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:53:25.314 [2024-07-22 10:48:33.210651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:26.246 [2024-07-22 10:48:33.932219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:26.246 Malloc0 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:26.246 10:48:33 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:26.247 Delay0 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:26.247 10:48:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:26.247 [2024-07-22 10:48:33.999611] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:26.247 10:48:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:26.247 10:48:34 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:53:26.247 10:48:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:26.247 10:48:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:26.247 10:48:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:26.247 10:48:34 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:53:26.505 [2024-07-22 10:48:34.194882] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:53:28.406 Initializing NVMe Controllers 00:53:28.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:53:28.406 controller IO queue size 128 less than required 00:53:28.406 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:53:28.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:53:28.406 Initialization complete. Launching workers. 
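Before the abort workload above was launched, the target side was assembled entirely over RPC: a 64 MiB malloc bdev with 4096-byte blocks is wrapped in a delay bdev whose latency knobs are all set to 1,000,000 µs (roughly one second per I/O, which keeps commands in flight long enough to be abortable), and that Delay0 bdev is exposed as a namespace of nqn.2016-06.io.spdk:cnode0 behind a TCP listener at 10.0.0.2:4420. A condensed sketch of the same sequence, issuing the identical subcommands and arguments through scripts/rpc.py instead of the rpc_cmd helper seen in the trace (paths are those of this run; driving rpc.py directly like this is an assumption for illustration, not the abort.sh script itself):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB backing bdev, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s artificial latency on reads and writes
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: queue I/O against the slow namespace and abort it, as in the trace.
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

The NS/CTRLR statistics that follow are the output of that abort run.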
00:53:28.406 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 43223 00:53:28.406 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43284, failed to submit 62 00:53:28.406 success 43227, unsuccess 57, failed 0 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:53:28.407 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:53:28.407 rmmod nvme_tcp 00:53:28.407 rmmod nvme_fabrics 00:53:28.407 rmmod nvme_keyring 00:53:28.665 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:53:28.665 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:53:28.665 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:53:28.665 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 84840 ']' 00:53:28.665 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 84840 00:53:28.665 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 84840 ']' 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 84840 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84840 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:53:28.666 killing process with pid 84840 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84840' 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 84840 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 84840 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:53:28.666 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:28.926 10:48:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:53:28.926 00:53:28.926 real 0m4.321s 00:53:28.926 user 0m11.814s 00:53:28.926 sys 0m1.303s 00:53:28.926 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:53:28.926 10:48:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:28.926 ************************************ 00:53:28.926 END TEST nvmf_abort 00:53:28.926 ************************************ 00:53:28.926 10:48:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:53:28.926 10:48:36 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:53:28.926 10:48:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:53:28.926 10:48:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:53:28.926 10:48:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:28.926 ************************************ 00:53:28.926 START TEST nvmf_ns_hotplug_stress 00:53:28.926 ************************************ 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:53:28.926 * Looking for test storage... 00:53:28.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:28.926 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:29.184 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:53:29.184 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:53:29.184 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:29.184 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:29.184 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:29.184 10:48:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:29.185 10:48:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:53:29.185 Cannot find device "nvmf_tgt_br" 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:53:29.185 Cannot find device "nvmf_tgt_br2" 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:53:29.185 Cannot find device "nvmf_tgt_br" 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:53:29.185 Cannot find device "nvmf_tgt_br2" 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:53:29.185 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:29.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:29.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:29.185 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:53:29.444 10:48:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:53:29.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:29.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:53:29.444 00:53:29.444 --- 10.0.0.2 ping statistics --- 00:53:29.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:29.444 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:53:29.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:29.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:53:29.444 00:53:29.444 --- 10.0.0.3 ping statistics --- 00:53:29.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:29.444 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:29.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:53:29.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:53:29.444 00:53:29.444 --- 10.0.0.1 ping statistics --- 00:53:29.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:29.444 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=85104 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 85104 00:53:29.444 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 85104 ']' 00:53:29.445 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:29.445 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:53:29.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:29.445 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:29.445 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:53:29.445 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:53:29.445 [2024-07-22 10:48:37.333903] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:53:29.445 [2024-07-22 10:48:37.333996] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:29.702 [2024-07-22 10:48:37.453019] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:53:29.702 [2024-07-22 10:48:37.477944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:29.702 [2024-07-22 10:48:37.520085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:29.702 [2024-07-22 10:48:37.520133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:29.702 [2024-07-22 10:48:37.520159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:29.702 [2024-07-22 10:48:37.520167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:29.702 [2024-07-22 10:48:37.520174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:29.702 [2024-07-22 10:48:37.520378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:53:29.702 [2024-07-22 10:48:37.521355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:53:29.703 [2024-07-22 10:48:37.521356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:53:30.269 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:53:30.269 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:53:30.269 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:53:30.269 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:53:30.269 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:53:30.527 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:30.527 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:53:30.527 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:53:30.527 [2024-07-22 10:48:38.398993] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:30.527 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:53:30.785 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:31.043 [2024-07-22 10:48:38.779669] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:31.043 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:53:31.301 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:53:31.301 Malloc0 00:53:31.301 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:53:31.559 Delay0 00:53:31.559 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:53:31.817 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:53:32.130 NULL1 00:53:32.130 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:53:32.130 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:53:32.130 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=85229 00:53:32.130 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:32.130 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:32.390 Read completed with error (sct=0, sc=11) 00:53:32.390 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:32.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:32.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:32.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:32.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:32.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:32.647 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:53:32.647 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:53:32.647 true 00:53:32.647 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:32.647 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:33.581 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:33.840 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:53:33.840 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:53:33.840 true 00:53:33.840 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:33.840 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:34.098 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:34.358 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:53:34.358 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:53:34.617 true 00:53:34.617 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:34.617 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:35.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:35.557 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:35.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:35.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:35.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:35.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:35.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:35.815 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:53:35.815 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:53:36.073 true 00:53:36.073 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:36.073 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:37.010 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:37.010 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:53:37.010 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:53:37.268 true 00:53:37.268 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:37.268 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:37.526 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:37.526 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:53:37.526 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:53:37.784 true 00:53:37.784 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:37.784 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:38.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:38.721 10:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:38.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:53:38.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:38.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:38.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:38.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:38.980 10:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:53:38.980 10:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:53:39.237 true 00:53:39.237 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:39.237 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:40.174 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:40.174 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:53:40.174 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:53:40.432 true 00:53:40.432 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:40.432 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:40.690 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:40.690 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:53:40.690 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:53:40.948 true 00:53:40.948 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:40.948 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:42.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:42.320 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:42.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:42.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:42.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:42.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:42.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:42.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:42.320 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:53:42.320 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 
00:53:42.320 true 00:53:42.578 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:42.578 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:43.512 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:43.512 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:53:43.512 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:53:43.770 true 00:53:43.770 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:43.770 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:43.770 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:44.028 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:53:44.028 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:53:44.286 true 00:53:44.286 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:44.286 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:45.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:45.222 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:45.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:45.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:45.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:45.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:45.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:45.480 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:53:45.480 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:53:45.738 true 00:53:45.739 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:45.739 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:46.674 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:46.674 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:53:46.674 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:53:46.931 true 00:53:46.931 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:46.931 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:47.190 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:47.190 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:53:47.190 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:53:47.448 true 00:53:47.449 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:47.449 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:48.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:48.642 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:48.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:48.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:48.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:48.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:48.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:48.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:48.642 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:53:48.642 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:53:48.900 true 00:53:48.900 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:48.900 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:49.835 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:49.835 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:53:49.835 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:53:50.093 true 00:53:50.093 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:50.093 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:50.350 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:50.608 10:48:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:53:50.608 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:53:50.608 true 00:53:50.608 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:50.608 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:52.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:52.079 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:52.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:52.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:52.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:52.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:52.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:52.079 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:53:52.079 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:53:52.079 true 00:53:52.079 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:52.079 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:53.015 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:53.273 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:53:53.273 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:53:53.273 true 00:53:53.273 10:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:53.273 10:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:53.532 10:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:53.791 10:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:53:53.791 10:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:53:53.791 true 00:53:54.050 10:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:54.050 10:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:54.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:54.988 10:49:02 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:54.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:54.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:54.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:54.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:55.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:55.247 10:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:53:55.247 10:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:53:55.247 true 00:53:55.505 10:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:55.505 10:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:56.439 10:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:56.439 10:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:53:56.439 10:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:53:56.695 true 00:53:56.695 10:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:56.695 10:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:56.695 10:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:56.953 10:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:53:56.953 10:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:53:57.211 true 00:53:57.211 10:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:57.211 10:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:58.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:58.146 10:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:58.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:58.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:58.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:58.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:58.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:58.404 10:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:53:58.404 10:49:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:53:58.662 true 00:53:58.662 10:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:58.662 10:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:59.737 10:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:59.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:59.737 10:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:53:59.737 10:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:53:59.737 true 00:53:59.737 10:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:53:59.737 10:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:59.997 10:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:00.266 10:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:54:00.266 10:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:54:00.266 true 00:54:00.266 10:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:54:00.266 10:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:01.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:01.640 10:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:01.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:01.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:01.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:01.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:01.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:01.640 10:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:54:01.640 10:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:54:01.898 true 00:54:01.898 10:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229 00:54:01.898 10:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:02.835 Initializing NVMe Controllers 00:54:02.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:54:02.835 
Controller IO queue size 128, less than required.
00:54:02.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:54:02.835 Controller IO queue size 128, less than required.
00:54:02.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:54:02.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:54:02.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:54:02.835 Initialization complete. Launching workers.
00:54:02.835 ========================================================
00:54:02.835 Latency(us)
00:54:02.835 Device Information : IOPS MiB/s Average min max
00:54:02.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2167.20 1.06 42713.11 1793.55 1013582.73
00:54:02.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18731.63 9.15 6833.43 2839.27 439689.55
00:54:02.835 ========================================================
00:54:02.835 Total : 20898.83 10.20 10554.13 1793.55 1013582.73
00:54:02.835
00:54:02.835 10:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:54:02.835 10:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:54:02.835 10:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:54:03.094 true
00:54:03.094 10:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85229
00:54:03.094 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (85229) - No such process
00:54:03.094 10:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 85229
00:54:03.094 10:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:54:03.353 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:54:03.353 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:54:03.353 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:54:03.353 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:54:03.353 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:54:03.353 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:54:03.612 null0
00:54:03.612 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:54:03.612 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:54:03.612 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:54:03.871 null1
00:54:03.871 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:54:03.871 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:54:03.871 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:54:04.130 null2 00:54:04.130 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:54:04.130 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:54:04.130 10:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:54:04.130 null3 00:54:04.130 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:54:04.130 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:54:04.130 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:54:04.389 null4 00:54:04.389 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:54:04.389 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:54:04.389 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:54:04.647 null5 00:54:04.647 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:54:04.647 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:54:04.647 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:54:04.647 null6 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:54:04.907 null7 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
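
Up to this point the trace is plain setup: once the single-namespace phase finishes, eight small null bdevs are created so that each hotplug worker can back one namespace. A minimal sketch of that setup loop, reconstructed from the sh@58-sh@60 trace lines above (rpc_py is a stand-in variable for the rpc.py path shown in the log; the in-tree script may differ in detail):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # 100 MB null bdev with a 4096-byte block size, named null0..null7
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
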
00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
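
Each backgrounded worker runs the add_remove helper, whose shape can be read straight off the sh@14-sh@18 trace lines: it binds one namespace ID to one null bdev and then attaches and detaches that namespace ten times through the RPC interface. A hedged reconstruction, with names taken from the trace (the in-tree helper may differ):

    # add_remove NSID BDEV: hot-plug BDEV as namespace NSID ten times
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
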
00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
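
Because eight of these loops run concurrently, the add and remove calls interleave in the trace, and the two RPCs are easy to misread: nvmf_subsystem_add_ns takes the namespace ID through -n, then the subsystem NQN, then the bdev name, while nvmf_subsystem_remove_ns takes the NQN followed by the namespace ID. The same invocations as in the trace, shown standalone for readability:

    "$rpc_py" nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
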
00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 86252 86254 86255 86257 86259 86261 86264 86266 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:04.907 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:54:05.166 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:05.166 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:54:05.166 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:54:05.166 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:54:05.166 10:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:54:05.166 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:54:05.166 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:05.166 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
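
The sh@62-sh@66 lines are the driver loop: each add_remove call is backgrounded, its PID collected, and the script then blocks on all eight workers, which is why the trace that follows interleaves namespaces 1 through 8. A minimal sketch, assuming the add_remove helper sketched above:

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 backed by null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                      # the trace shows: wait 86252 86254 86255 ...
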
00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:05.425 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.683 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:54:05.941 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.941 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.941 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:54:05.941 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:54:05.942 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:54:06.200 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:06.200 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:54:06.200 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:54:06.200 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:54:06.200 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.200 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.200 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:54:06.200 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.200 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.200 10:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.200 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:54:06.458 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:54:06.459 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:54:06.459 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:54:06.459 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:54:06.459 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:54:06.459 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.459 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.459 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:54:06.459 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:06.459 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:54:06.718 10:49:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:54:06.718 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:54:06.976 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:54:06.977 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.236 10:49:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.236 10:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:54:07.236 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:54:07.236 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:54:07.236 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.236 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.236 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:54:07.236 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:54:07.236 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.495 
10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.495 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:54:07.755 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:54:07.755 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:07.755 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:07.755 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:54:07.755 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:54:07.755 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:54:07.755 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:54:07.755 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:54:07.755 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:54:07.755 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:54:08.014 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:54:08.273 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:54:08.273 10:49:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:54:08.273 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:54:08.273 10:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:54:08.273 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:54:08.273 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.273 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.273 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:54:08.273 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:08.273 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:08.273 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.273 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.273 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:54:08.274 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.274 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.274 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:54:08.274 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.274 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.274 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:54:08.533 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:54:08.792 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.793 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.793 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:54:08.793 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:54:08.793 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:08.793 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:08.793 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:09.052 10:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:54:09.313 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:54:09.313 rmmod nvme_tcp 00:54:09.313 rmmod nvme_fabrics 00:54:09.573 rmmod nvme_keyring 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 85104 ']' 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 85104 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 85104 ']' 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 85104 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85104 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:54:09.573 killing process with pid 85104 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85104' 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@967 -- # kill 85104 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 85104 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:09.573 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:09.832 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:54:09.832 00:54:09.832 real 0m40.834s 00:54:09.832 user 3m4.527s 00:54:09.832 sys 0m15.201s 00:54:09.832 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:54:09.832 10:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:54:09.832 ************************************ 00:54:09.832 END TEST nvmf_ns_hotplug_stress 00:54:09.832 ************************************ 00:54:09.832 10:49:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:54:09.832 10:49:17 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:54:09.832 10:49:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:54:09.832 10:49:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:54:09.832 10:49:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:09.832 ************************************ 00:54:09.832 START TEST nvmf_connect_stress 00:54:09.832 ************************************ 00:54:09.832 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:54:09.832 * Looking for test storage... 
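A minimal sketch of what the ns_hotplug_stress trace above is exercising, reconstructed from the xtrace alone: lines 16-18 of target/ns_hotplug_stress.sh make ten passes over nqn.2016-06.io.spdk:cnode1, attaching null bdevs as namespace IDs 1-8 and detaching them again in quick succession. The interleaving, randomization and backgrounding shown below are assumptions; only the rpc.py invocations and the ten-iteration bound are taken from the trace.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for ((i = 0; i < 10; i++)); do                  # sh@16: ten passes over the subsystem
        for n in 1 2 3 4 5 6 7 8; do
            # sh@17: namespace ID n is backed by bdev null(n-1)
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
            # sh@18: remove a (possibly different) namespace while adds are still in flight
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$(( (RANDOM % 8) + 1 ))" &
        done
        wait
    done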
00:54:09.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:09.832 10:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:09.832 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:54:10.093 Cannot find device "nvmf_tgt_br" 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:54:10.093 Cannot find device "nvmf_tgt_br2" 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:54:10.093 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:54:10.093 Cannot find device "nvmf_tgt_br" 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:54:10.094 Cannot find device "nvmf_tgt_br2" 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:54:10.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:10.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:54:10.094 10:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:10.094 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:10.094 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:54:10.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:54:10.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:54:10.354 00:54:10.354 --- 10.0.0.2 ping statistics --- 00:54:10.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:10.354 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:54:10.354 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:10.354 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:54:10.354 00:54:10.354 --- 10.0.0.3 ping statistics --- 00:54:10.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:10.354 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:10.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:10.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:54:10.354 00:54:10.354 --- 10.0.0.1 ping statistics --- 00:54:10.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:10.354 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:54:10.354 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:54:10.355 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:10.355 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=87613 00:54:10.355 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:54:10.355 10:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 87613 00:54:10.355 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 87613 ']' 00:54:10.355 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:10.355 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:54:10.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:10.355 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
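The nvmf_veth_init sequence traced above builds a small bridged topology before the target starts: the initiator interface (10.0.0.1) stays in the root namespace, the target interfaces (10.0.0.2 and 10.0.0.3) live in nvmf_tgt_ns_spdk, TCP port 4420 is opened through iptables, and nvmf_tgt is then launched inside the namespace. A condensed sketch using the same interface names as the trace; the second veth pair (nvmf_tgt_if2/10.0.0.3) is set up the same way and is omitted here.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                                 # bridge the two veth peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                      # reachability check, as in the trace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &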
00:54:10.355 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:54:10.355 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:10.615 [2024-07-22 10:49:18.288196] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:54:10.615 [2024-07-22 10:49:18.288276] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:10.615 [2024-07-22 10:49:18.406976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:54:10.615 [2024-07-22 10:49:18.432502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:54:10.615 [2024-07-22 10:49:18.473343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:10.615 [2024-07-22 10:49:18.473395] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:10.615 [2024-07-22 10:49:18.473405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:10.615 [2024-07-22 10:49:18.473413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:10.615 [2024-07-22 10:49:18.473420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:10.615 [2024-07-22 10:49:18.473640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:54:10.615 [2024-07-22 10:49:18.474530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:54:10.615 [2024-07-22 10:49:18.474532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:54:11.184 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:54:11.184 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:54:11.184 10:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:54:11.184 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:54:11.184 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:11.444 [2024-07-22 10:49:19.179813] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:11.444 [2024-07-22 10:49:19.196711] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:11.444 NULL1 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=87665 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:54:11.444 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 
1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:11.445 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:12.014 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:12.014 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:12.014 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:12.014 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:12.014 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:12.273 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:12.273 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:12.273 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:12.273 10:49:19 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:54:12.273 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:12.532 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:12.532 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:12.532 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:12.532 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:12.532 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:12.792 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:12.792 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:12.792 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:12.792 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:12.792 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:13.052 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:13.052 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:13.052 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:13.052 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:13.052 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:13.621 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:13.621 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:13.621 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:13.621 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:13.621 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:13.880 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:13.880 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:13.880 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:13.880 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:13.880 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:14.139 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:14.139 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:14.139 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:14.139 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:14.139 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:14.400 10:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:14.401 10:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:14.401 10:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:14.401 10:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:14.401 10:49:22 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:14.659 10:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:14.659 10:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:14.659 10:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:14.659 10:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:14.659 10:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:15.226 10:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:15.226 10:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:15.226 10:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:15.226 10:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:15.226 10:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:15.484 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:15.484 10:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:15.484 10:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:15.484 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:15.484 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:15.741 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:15.741 10:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:15.741 10:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:15.741 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:15.741 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:16.000 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:16.000 10:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:16.000 10:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:16.000 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:16.000 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:16.567 10:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:16.567 10:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:16.567 10:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:16.567 10:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:16.567 10:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:16.825 10:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:16.825 10:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:16.825 10:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:16.825 10:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:16.825 10:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
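The repeated kill -0 / rpc_cmd pairs above are connect_stress.sh's monitoring loop. Pulled together from the trace, the test provisions a TCP subsystem backed by a null bdev, launches the connect_stress tool (PID 87665 in this run) against it for 10 seconds, and keeps replaying a batch of RPCs while it runs. A condensed reconstruction follows; the contents of rpc.txt are not visible in the xtrace and are left elided, and since xtrace does not print redirections, the "rpc_cmd < rpc.txt" form is an assumption.

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512

    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    PERF_PID=$!

    rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    rm -f "$rpcs"
    # sh@27-28: a for-loop over $(seq 1 20) cat-appends 20 RPC entries into $rpcs (contents elided)

    while kill -0 "$PERF_PID"; do   # sh@34: loop until connect_stress exits
        rpc_cmd < "$rpcs"           # sh@35: replay the batch while connections churn
    done
    wait "$PERF_PID"                # sh@38: propagate connect_stress's exit status
    rm -f "$rpcs"                   # sh@39: clean up before nvmftestfini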
00:54:17.083 10:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:17.083 10:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:17.083 10:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:17.083 10:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:17.083 10:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:17.341 10:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:17.341 10:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:17.341 10:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:17.341 10:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:17.341 10:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:17.600 10:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:17.600 10:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:17.600 10:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:17.600 10:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:17.600 10:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:18.168 10:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.168 10:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:18.168 10:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:18.168 10:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.168 10:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:18.427 10:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.427 10:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:18.427 10:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:18.427 10:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.427 10:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:18.686 10:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.686 10:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:18.686 10:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:18.686 10:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.686 10:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:18.944 10:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.944 10:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:18.944 10:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:18.944 10:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.944 10:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:19.203 10:49:27 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:19.203 10:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:19.203 10:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:19.203 10:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:19.203 10:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:19.769 10:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:19.769 10:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:19.769 10:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:19.769 10:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:19.769 10:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:20.028 10:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:20.028 10:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:20.029 10:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:20.029 10:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:20.029 10:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:20.288 10:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:20.288 10:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:20.288 10:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:20.288 10:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:20.288 10:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:20.547 10:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:20.547 10:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:20.547 10:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:20.547 10:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:20.547 10:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:21.115 10:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:21.115 10:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:21.115 10:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:21.115 10:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:21.115 10:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:21.374 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:21.374 10:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:21.374 10:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:21.374 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:21.374 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:21.634 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:21.634 10:49:29 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:21.634 10:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:54:21.634 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:21.634 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:21.634 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87665 00:54:21.894 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (87665) - No such process 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 87665 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:54:21.894 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:54:21.894 rmmod nvme_tcp 00:54:21.894 rmmod nvme_fabrics 00:54:21.894 rmmod nvme_keyring 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 87613 ']' 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 87613 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 87613 ']' 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 87613 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87613 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:54:22.153 killing process with pid 87613 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87613' 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 87613 00:54:22.153 10:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 87613 00:54:22.153 10:49:30 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:54:22.153 10:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:54:22.153 10:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:54:22.153 10:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:54:22.153 10:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:54:22.153 10:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:22.153 10:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:22.153 10:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:22.412 10:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:54:22.412 00:54:22.412 real 0m12.476s 00:54:22.412 user 0m40.683s 00:54:22.412 sys 0m4.208s 00:54:22.412 ************************************ 00:54:22.412 END TEST nvmf_connect_stress 00:54:22.412 ************************************ 00:54:22.412 10:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:54:22.412 10:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:54:22.412 10:49:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:54:22.413 10:49:30 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:54:22.413 10:49:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:54:22.413 10:49:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:54:22.413 10:49:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:22.413 ************************************ 00:54:22.413 START TEST nvmf_fused_ordering 00:54:22.413 ************************************ 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:54:22.413 * Looking for test storage... 
00:54:22.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:54:22.413 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:54:22.673 Cannot find device "nvmf_tgt_br" 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:54:22.673 Cannot find device "nvmf_tgt_br2" 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:54:22.673 Cannot find device "nvmf_tgt_br" 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:54:22.673 Cannot find device "nvmf_tgt_br2" 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:54:22.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:22.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:22.673 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:54:22.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:54:22.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:54:22.933 00:54:22.933 --- 10.0.0.2 ping statistics --- 00:54:22.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:22.933 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:54:22.933 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:22.933 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:54:22.933 00:54:22.933 --- 10.0.0.3 ping statistics --- 00:54:22.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:22.933 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:22.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:22.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:54:22.933 00:54:22.933 --- 10.0.0.1 ping statistics --- 00:54:22.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:22.933 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=87990 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 87990 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 87990 ']' 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:54:22.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:54:22.933 10:49:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:54:22.933 [2024-07-22 10:49:30.807089] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:54:22.933 [2024-07-22 10:49:30.807164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:23.192 [2024-07-22 10:49:30.926559] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:54:23.192 [2024-07-22 10:49:30.950611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:23.192 [2024-07-22 10:49:30.995070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:23.192 [2024-07-22 10:49:30.995120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:23.192 [2024-07-22 10:49:30.995130] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:23.192 [2024-07-22 10:49:30.995137] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:23.192 [2024-07-22 10:49:30.995144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:23.192 [2024-07-22 10:49:30.995166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:54:23.760 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:54:23.760 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:54:23.760 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:54:23.760 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:54:23.760 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:54:24.018 [2024-07-22 10:49:31.719393] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:54:24.018 [2024-07-22 10:49:31.743558] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:54:24.018 NULL1 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:24.018 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:54:24.018 [2024-07-22 10:49:31.815865] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:54:24.018 [2024-07-22 10:49:31.815917] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88040 ] 00:54:24.018 [2024-07-22 10:49:31.935490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
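Before the fused_ordering tool attaches (output below), the trace above configures the target entirely over RPC: a TCP transport, a subsystem, a TCP listener on 10.0.0.2:4420, and a null bdev (NULL1, 1000 MB, 512-byte blocks) added as namespace 1. The test's rpc_cmd helper forwards these calls to SPDK's scripts/rpc.py; reproduced outside the harness they would look roughly like the sketch below (a sketch only, assuming a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket; the flags are copied from the trace, not re-derived):

    # Transport, subsystem, listener, and a null bdev namespace, as in the trace above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512-byte blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects using the transport ID string shown in the trace (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1) and produces the fused_ordering(N) progress output that follows.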
00:54:24.278 Attached to nqn.2016-06.io.spdk:cnode1 00:54:24.278 Namespace ID: 1 size: 1GB 00:54:24.278 fused_ordering(0) 00:54:24.278 fused_ordering(1) 00:54:24.278 fused_ordering(2) 00:54:24.278 fused_ordering(3) 00:54:24.278 fused_ordering(4) 00:54:24.278 fused_ordering(5) 00:54:24.278 fused_ordering(6) 00:54:24.278 fused_ordering(7) 00:54:24.278 fused_ordering(8) 00:54:24.278 fused_ordering(9) 00:54:24.278 fused_ordering(10) 00:54:24.278 fused_ordering(11) 00:54:24.278 fused_ordering(12) 00:54:24.278 fused_ordering(13) 00:54:24.278 fused_ordering(14) 00:54:24.278 fused_ordering(15) 00:54:24.278 fused_ordering(16) 00:54:24.278 fused_ordering(17) 00:54:24.278 fused_ordering(18) 00:54:24.278 fused_ordering(19) 00:54:24.278 fused_ordering(20) 00:54:24.278 fused_ordering(21) 00:54:24.278 fused_ordering(22) 00:54:24.278 fused_ordering(23) 00:54:24.278 fused_ordering(24) 00:54:24.278 fused_ordering(25) 00:54:24.278 fused_ordering(26) 00:54:24.278 fused_ordering(27) 00:54:24.278 fused_ordering(28) 00:54:24.278 fused_ordering(29) 00:54:24.278 fused_ordering(30) 00:54:24.278 fused_ordering(31) 00:54:24.278 fused_ordering(32) 00:54:24.278 fused_ordering(33) 00:54:24.278 fused_ordering(34) 00:54:24.278 fused_ordering(35) 00:54:24.278 fused_ordering(36) 00:54:24.278 fused_ordering(37) 00:54:24.278 fused_ordering(38) 00:54:24.278 fused_ordering(39) 00:54:24.278 fused_ordering(40) 00:54:24.278 fused_ordering(41) 00:54:24.278 fused_ordering(42) 00:54:24.278 fused_ordering(43) 00:54:24.278 fused_ordering(44) 00:54:24.278 fused_ordering(45) 00:54:24.278 fused_ordering(46) 00:54:24.278 fused_ordering(47) 00:54:24.278 fused_ordering(48) 00:54:24.278 fused_ordering(49) 00:54:24.278 fused_ordering(50) 00:54:24.278 fused_ordering(51) 00:54:24.278 fused_ordering(52) 00:54:24.278 fused_ordering(53) 00:54:24.278 fused_ordering(54) 00:54:24.278 fused_ordering(55) 00:54:24.278 fused_ordering(56) 00:54:24.278 fused_ordering(57) 00:54:24.278 fused_ordering(58) 00:54:24.278 fused_ordering(59) 00:54:24.278 fused_ordering(60) 00:54:24.278 fused_ordering(61) 00:54:24.278 fused_ordering(62) 00:54:24.278 fused_ordering(63) 00:54:24.278 fused_ordering(64) 00:54:24.278 fused_ordering(65) 00:54:24.278 fused_ordering(66) 00:54:24.278 fused_ordering(67) 00:54:24.278 fused_ordering(68) 00:54:24.278 fused_ordering(69) 00:54:24.278 fused_ordering(70) 00:54:24.278 fused_ordering(71) 00:54:24.278 fused_ordering(72) 00:54:24.278 fused_ordering(73) 00:54:24.278 fused_ordering(74) 00:54:24.278 fused_ordering(75) 00:54:24.278 fused_ordering(76) 00:54:24.278 fused_ordering(77) 00:54:24.278 fused_ordering(78) 00:54:24.278 fused_ordering(79) 00:54:24.278 fused_ordering(80) 00:54:24.278 fused_ordering(81) 00:54:24.278 fused_ordering(82) 00:54:24.278 fused_ordering(83) 00:54:24.278 fused_ordering(84) 00:54:24.278 fused_ordering(85) 00:54:24.278 fused_ordering(86) 00:54:24.278 fused_ordering(87) 00:54:24.278 fused_ordering(88) 00:54:24.278 fused_ordering(89) 00:54:24.278 fused_ordering(90) 00:54:24.278 fused_ordering(91) 00:54:24.278 fused_ordering(92) 00:54:24.278 fused_ordering(93) 00:54:24.278 fused_ordering(94) 00:54:24.278 fused_ordering(95) 00:54:24.278 fused_ordering(96) 00:54:24.278 fused_ordering(97) 00:54:24.278 fused_ordering(98) 00:54:24.278 fused_ordering(99) 00:54:24.278 fused_ordering(100) 00:54:24.278 fused_ordering(101) 00:54:24.278 fused_ordering(102) 00:54:24.278 fused_ordering(103) 00:54:24.278 fused_ordering(104) 00:54:24.278 fused_ordering(105) 00:54:24.278 fused_ordering(106) 00:54:24.278 fused_ordering(107) 
00:54:24.278 fused_ordering(108) 00:54:24.278 fused_ordering(109) 00:54:24.278 fused_ordering(110) 00:54:24.278 fused_ordering(111) 00:54:24.278 fused_ordering(112) 00:54:24.278 fused_ordering(113) 00:54:24.278 fused_ordering(114) 00:54:24.278 fused_ordering(115) 00:54:24.278 fused_ordering(116) 00:54:24.278 fused_ordering(117) 00:54:24.278 fused_ordering(118) 00:54:24.278 fused_ordering(119) 00:54:24.278 fused_ordering(120) 00:54:24.278 fused_ordering(121) 00:54:24.278 fused_ordering(122) 00:54:24.278 fused_ordering(123) 00:54:24.278 fused_ordering(124) 00:54:24.278 fused_ordering(125) 00:54:24.278 fused_ordering(126) 00:54:24.278 fused_ordering(127) 00:54:24.278 fused_ordering(128) 00:54:24.278 fused_ordering(129) 00:54:24.278 fused_ordering(130) 00:54:24.278 fused_ordering(131) 00:54:24.278 fused_ordering(132) 00:54:24.278 fused_ordering(133) 00:54:24.278 fused_ordering(134) 00:54:24.278 fused_ordering(135) 00:54:24.278 fused_ordering(136) 00:54:24.278 fused_ordering(137) 00:54:24.278 fused_ordering(138) 00:54:24.278 fused_ordering(139) 00:54:24.278 fused_ordering(140) 00:54:24.278 fused_ordering(141) 00:54:24.278 fused_ordering(142) 00:54:24.278 fused_ordering(143) 00:54:24.278 fused_ordering(144) 00:54:24.278 fused_ordering(145) 00:54:24.278 fused_ordering(146) 00:54:24.278 fused_ordering(147) 00:54:24.278 fused_ordering(148) 00:54:24.278 fused_ordering(149) 00:54:24.278 fused_ordering(150) 00:54:24.278 fused_ordering(151) 00:54:24.278 fused_ordering(152) 00:54:24.278 fused_ordering(153) 00:54:24.278 fused_ordering(154) 00:54:24.278 fused_ordering(155) 00:54:24.278 fused_ordering(156) 00:54:24.278 fused_ordering(157) 00:54:24.278 fused_ordering(158) 00:54:24.278 fused_ordering(159) 00:54:24.278 fused_ordering(160) 00:54:24.278 fused_ordering(161) 00:54:24.278 fused_ordering(162) 00:54:24.278 fused_ordering(163) 00:54:24.278 fused_ordering(164) 00:54:24.278 fused_ordering(165) 00:54:24.278 fused_ordering(166) 00:54:24.278 fused_ordering(167) 00:54:24.278 fused_ordering(168) 00:54:24.278 fused_ordering(169) 00:54:24.278 fused_ordering(170) 00:54:24.278 fused_ordering(171) 00:54:24.278 fused_ordering(172) 00:54:24.278 fused_ordering(173) 00:54:24.278 fused_ordering(174) 00:54:24.278 fused_ordering(175) 00:54:24.278 fused_ordering(176) 00:54:24.278 fused_ordering(177) 00:54:24.278 fused_ordering(178) 00:54:24.278 fused_ordering(179) 00:54:24.278 fused_ordering(180) 00:54:24.278 fused_ordering(181) 00:54:24.278 fused_ordering(182) 00:54:24.278 fused_ordering(183) 00:54:24.278 fused_ordering(184) 00:54:24.278 fused_ordering(185) 00:54:24.278 fused_ordering(186) 00:54:24.279 fused_ordering(187) 00:54:24.279 fused_ordering(188) 00:54:24.279 fused_ordering(189) 00:54:24.279 fused_ordering(190) 00:54:24.279 fused_ordering(191) 00:54:24.279 fused_ordering(192) 00:54:24.279 fused_ordering(193) 00:54:24.279 fused_ordering(194) 00:54:24.279 fused_ordering(195) 00:54:24.279 fused_ordering(196) 00:54:24.279 fused_ordering(197) 00:54:24.279 fused_ordering(198) 00:54:24.279 fused_ordering(199) 00:54:24.279 fused_ordering(200) 00:54:24.279 fused_ordering(201) 00:54:24.279 fused_ordering(202) 00:54:24.279 fused_ordering(203) 00:54:24.279 fused_ordering(204) 00:54:24.279 fused_ordering(205) 00:54:24.537 fused_ordering(206) 00:54:24.537 fused_ordering(207) 00:54:24.537 fused_ordering(208) 00:54:24.537 fused_ordering(209) 00:54:24.537 fused_ordering(210) 00:54:24.537 fused_ordering(211) 00:54:24.537 fused_ordering(212) 00:54:24.537 fused_ordering(213) 00:54:24.537 fused_ordering(214) 00:54:24.537 
fused_ordering(215) 00:54:24.537 fused_ordering(216) 00:54:24.537 fused_ordering(217) 00:54:24.537 fused_ordering(218) 00:54:24.537 fused_ordering(219) 00:54:24.537 fused_ordering(220) 00:54:24.537 fused_ordering(221) 00:54:24.537 fused_ordering(222) 00:54:24.537 fused_ordering(223) 00:54:24.537 fused_ordering(224) 00:54:24.537 fused_ordering(225) 00:54:24.537 fused_ordering(226) 00:54:24.537 fused_ordering(227) 00:54:24.537 fused_ordering(228) 00:54:24.537 fused_ordering(229) 00:54:24.537 fused_ordering(230) 00:54:24.537 fused_ordering(231) 00:54:24.537 fused_ordering(232) 00:54:24.537 fused_ordering(233) 00:54:24.537 fused_ordering(234) 00:54:24.537 fused_ordering(235) 00:54:24.537 fused_ordering(236) 00:54:24.537 fused_ordering(237) 00:54:24.537 fused_ordering(238) 00:54:24.537 fused_ordering(239) 00:54:24.537 fused_ordering(240) 00:54:24.537 fused_ordering(241) 00:54:24.537 fused_ordering(242) 00:54:24.537 fused_ordering(243) 00:54:24.537 fused_ordering(244) 00:54:24.537 fused_ordering(245) 00:54:24.537 fused_ordering(246) 00:54:24.537 fused_ordering(247) 00:54:24.537 fused_ordering(248) 00:54:24.537 fused_ordering(249) 00:54:24.537 fused_ordering(250) 00:54:24.537 fused_ordering(251) 00:54:24.537 fused_ordering(252) 00:54:24.537 fused_ordering(253) 00:54:24.537 fused_ordering(254) 00:54:24.537 fused_ordering(255) 00:54:24.537 fused_ordering(256) 00:54:24.537 fused_ordering(257) 00:54:24.537 fused_ordering(258) 00:54:24.537 fused_ordering(259) 00:54:24.537 fused_ordering(260) 00:54:24.537 fused_ordering(261) 00:54:24.537 fused_ordering(262) 00:54:24.537 fused_ordering(263) 00:54:24.537 fused_ordering(264) 00:54:24.537 fused_ordering(265) 00:54:24.537 fused_ordering(266) 00:54:24.537 fused_ordering(267) 00:54:24.537 fused_ordering(268) 00:54:24.537 fused_ordering(269) 00:54:24.537 fused_ordering(270) 00:54:24.537 fused_ordering(271) 00:54:24.537 fused_ordering(272) 00:54:24.537 fused_ordering(273) 00:54:24.537 fused_ordering(274) 00:54:24.537 fused_ordering(275) 00:54:24.537 fused_ordering(276) 00:54:24.538 fused_ordering(277) 00:54:24.538 fused_ordering(278) 00:54:24.538 fused_ordering(279) 00:54:24.538 fused_ordering(280) 00:54:24.538 fused_ordering(281) 00:54:24.538 fused_ordering(282) 00:54:24.538 fused_ordering(283) 00:54:24.538 fused_ordering(284) 00:54:24.538 fused_ordering(285) 00:54:24.538 fused_ordering(286) 00:54:24.538 fused_ordering(287) 00:54:24.538 fused_ordering(288) 00:54:24.538 fused_ordering(289) 00:54:24.538 fused_ordering(290) 00:54:24.538 fused_ordering(291) 00:54:24.538 fused_ordering(292) 00:54:24.538 fused_ordering(293) 00:54:24.538 fused_ordering(294) 00:54:24.538 fused_ordering(295) 00:54:24.538 fused_ordering(296) 00:54:24.538 fused_ordering(297) 00:54:24.538 fused_ordering(298) 00:54:24.538 fused_ordering(299) 00:54:24.538 fused_ordering(300) 00:54:24.538 fused_ordering(301) 00:54:24.538 fused_ordering(302) 00:54:24.538 fused_ordering(303) 00:54:24.538 fused_ordering(304) 00:54:24.538 fused_ordering(305) 00:54:24.538 fused_ordering(306) 00:54:24.538 fused_ordering(307) 00:54:24.538 fused_ordering(308) 00:54:24.538 fused_ordering(309) 00:54:24.538 fused_ordering(310) 00:54:24.538 fused_ordering(311) 00:54:24.538 fused_ordering(312) 00:54:24.538 fused_ordering(313) 00:54:24.538 fused_ordering(314) 00:54:24.538 fused_ordering(315) 00:54:24.538 fused_ordering(316) 00:54:24.538 fused_ordering(317) 00:54:24.538 fused_ordering(318) 00:54:24.538 fused_ordering(319) 00:54:24.538 fused_ordering(320) 00:54:24.538 fused_ordering(321) 00:54:24.538 fused_ordering(322) 
00:54:24.538 fused_ordering(323) 00:54:24.538 fused_ordering(324) 00:54:24.538 fused_ordering(325) 00:54:24.538 fused_ordering(326) 00:54:24.538 fused_ordering(327) 00:54:24.538 fused_ordering(328) 00:54:24.538 fused_ordering(329) 00:54:24.538 fused_ordering(330) 00:54:24.538 fused_ordering(331) 00:54:24.538 fused_ordering(332) 00:54:24.538 fused_ordering(333) 00:54:24.538 fused_ordering(334) 00:54:24.538 fused_ordering(335) 00:54:24.538 fused_ordering(336) 00:54:24.538 fused_ordering(337) 00:54:24.538 fused_ordering(338) 00:54:24.538 fused_ordering(339) 00:54:24.538 fused_ordering(340) 00:54:24.538 fused_ordering(341) 00:54:24.538 fused_ordering(342) 00:54:24.538 fused_ordering(343) 00:54:24.538 fused_ordering(344) 00:54:24.538 fused_ordering(345) 00:54:24.538 fused_ordering(346) 00:54:24.538 fused_ordering(347) 00:54:24.538 fused_ordering(348) 00:54:24.538 fused_ordering(349) 00:54:24.538 fused_ordering(350) 00:54:24.538 fused_ordering(351) 00:54:24.538 fused_ordering(352) 00:54:24.538 fused_ordering(353) 00:54:24.538 fused_ordering(354) 00:54:24.538 fused_ordering(355) 00:54:24.538 fused_ordering(356) 00:54:24.538 fused_ordering(357) 00:54:24.538 fused_ordering(358) 00:54:24.538 fused_ordering(359) 00:54:24.538 fused_ordering(360) 00:54:24.538 fused_ordering(361) 00:54:24.538 fused_ordering(362) 00:54:24.538 fused_ordering(363) 00:54:24.538 fused_ordering(364) 00:54:24.538 fused_ordering(365) 00:54:24.538 fused_ordering(366) 00:54:24.538 fused_ordering(367) 00:54:24.538 fused_ordering(368) 00:54:24.538 fused_ordering(369) 00:54:24.538 fused_ordering(370) 00:54:24.538 fused_ordering(371) 00:54:24.538 fused_ordering(372) 00:54:24.538 fused_ordering(373) 00:54:24.538 fused_ordering(374) 00:54:24.538 fused_ordering(375) 00:54:24.538 fused_ordering(376) 00:54:24.538 fused_ordering(377) 00:54:24.538 fused_ordering(378) 00:54:24.538 fused_ordering(379) 00:54:24.538 fused_ordering(380) 00:54:24.538 fused_ordering(381) 00:54:24.538 fused_ordering(382) 00:54:24.538 fused_ordering(383) 00:54:24.538 fused_ordering(384) 00:54:24.538 fused_ordering(385) 00:54:24.538 fused_ordering(386) 00:54:24.538 fused_ordering(387) 00:54:24.538 fused_ordering(388) 00:54:24.538 fused_ordering(389) 00:54:24.538 fused_ordering(390) 00:54:24.538 fused_ordering(391) 00:54:24.538 fused_ordering(392) 00:54:24.538 fused_ordering(393) 00:54:24.538 fused_ordering(394) 00:54:24.538 fused_ordering(395) 00:54:24.538 fused_ordering(396) 00:54:24.538 fused_ordering(397) 00:54:24.538 fused_ordering(398) 00:54:24.538 fused_ordering(399) 00:54:24.538 fused_ordering(400) 00:54:24.538 fused_ordering(401) 00:54:24.538 fused_ordering(402) 00:54:24.538 fused_ordering(403) 00:54:24.538 fused_ordering(404) 00:54:24.538 fused_ordering(405) 00:54:24.538 fused_ordering(406) 00:54:24.538 fused_ordering(407) 00:54:24.538 fused_ordering(408) 00:54:24.538 fused_ordering(409) 00:54:24.538 fused_ordering(410) 00:54:24.796 fused_ordering(411) 00:54:24.796 fused_ordering(412) 00:54:24.796 fused_ordering(413) 00:54:24.797 fused_ordering(414) 00:54:24.797 fused_ordering(415) 00:54:24.797 fused_ordering(416) 00:54:24.797 fused_ordering(417) 00:54:24.797 fused_ordering(418) 00:54:24.797 fused_ordering(419) 00:54:24.797 fused_ordering(420) 00:54:24.797 fused_ordering(421) 00:54:24.797 fused_ordering(422) 00:54:24.797 fused_ordering(423) 00:54:24.797 fused_ordering(424) 00:54:24.797 fused_ordering(425) 00:54:24.797 fused_ordering(426) 00:54:24.797 fused_ordering(427) 00:54:24.797 fused_ordering(428) 00:54:24.797 fused_ordering(429) 00:54:24.797 
fused_ordering(430) 00:54:24.797 fused_ordering(431) 00:54:24.797 fused_ordering(432) 00:54:24.797 fused_ordering(433) 00:54:24.797 fused_ordering(434) 00:54:24.797 fused_ordering(435) 00:54:24.797 fused_ordering(436) 00:54:24.797 fused_ordering(437) 00:54:24.797 fused_ordering(438) 00:54:24.797 fused_ordering(439) 00:54:24.797 fused_ordering(440) 00:54:24.797 fused_ordering(441) 00:54:24.797 fused_ordering(442) 00:54:24.797 fused_ordering(443) 00:54:24.797 fused_ordering(444) 00:54:24.797 fused_ordering(445) 00:54:24.797 fused_ordering(446) 00:54:24.797 fused_ordering(447) 00:54:24.797 fused_ordering(448) 00:54:24.797 fused_ordering(449) 00:54:24.797 fused_ordering(450) 00:54:24.797 fused_ordering(451) 00:54:24.797 fused_ordering(452) 00:54:24.797 fused_ordering(453) 00:54:24.797 fused_ordering(454) 00:54:24.797 fused_ordering(455) 00:54:24.797 fused_ordering(456) 00:54:24.797 fused_ordering(457) 00:54:24.797 fused_ordering(458) 00:54:24.797 fused_ordering(459) 00:54:24.797 fused_ordering(460) 00:54:24.797 fused_ordering(461) 00:54:24.797 fused_ordering(462) 00:54:24.797 fused_ordering(463) 00:54:24.797 fused_ordering(464) 00:54:24.797 fused_ordering(465) 00:54:24.797 fused_ordering(466) 00:54:24.797 fused_ordering(467) 00:54:24.797 fused_ordering(468) 00:54:24.797 fused_ordering(469) 00:54:24.797 fused_ordering(470) 00:54:24.797 fused_ordering(471) 00:54:24.797 fused_ordering(472) 00:54:24.797 fused_ordering(473) 00:54:24.797 fused_ordering(474) 00:54:24.797 fused_ordering(475) 00:54:24.797 fused_ordering(476) 00:54:24.797 fused_ordering(477) 00:54:24.797 fused_ordering(478) 00:54:24.797 fused_ordering(479) 00:54:24.797 fused_ordering(480) 00:54:24.797 fused_ordering(481) 00:54:24.797 fused_ordering(482) 00:54:24.797 fused_ordering(483) 00:54:24.797 fused_ordering(484) 00:54:24.797 fused_ordering(485) 00:54:24.797 fused_ordering(486) 00:54:24.797 fused_ordering(487) 00:54:24.797 fused_ordering(488) 00:54:24.797 fused_ordering(489) 00:54:24.797 fused_ordering(490) 00:54:24.797 fused_ordering(491) 00:54:24.797 fused_ordering(492) 00:54:24.797 fused_ordering(493) 00:54:24.797 fused_ordering(494) 00:54:24.797 fused_ordering(495) 00:54:24.797 fused_ordering(496) 00:54:24.797 fused_ordering(497) 00:54:24.797 fused_ordering(498) 00:54:24.797 fused_ordering(499) 00:54:24.797 fused_ordering(500) 00:54:24.797 fused_ordering(501) 00:54:24.797 fused_ordering(502) 00:54:24.797 fused_ordering(503) 00:54:24.797 fused_ordering(504) 00:54:24.797 fused_ordering(505) 00:54:24.797 fused_ordering(506) 00:54:24.797 fused_ordering(507) 00:54:24.797 fused_ordering(508) 00:54:24.797 fused_ordering(509) 00:54:24.797 fused_ordering(510) 00:54:24.797 fused_ordering(511) 00:54:24.797 fused_ordering(512) 00:54:24.797 fused_ordering(513) 00:54:24.797 fused_ordering(514) 00:54:24.797 fused_ordering(515) 00:54:24.797 fused_ordering(516) 00:54:24.797 fused_ordering(517) 00:54:24.797 fused_ordering(518) 00:54:24.797 fused_ordering(519) 00:54:24.797 fused_ordering(520) 00:54:24.797 fused_ordering(521) 00:54:24.797 fused_ordering(522) 00:54:24.797 fused_ordering(523) 00:54:24.797 fused_ordering(524) 00:54:24.797 fused_ordering(525) 00:54:24.797 fused_ordering(526) 00:54:24.797 fused_ordering(527) 00:54:24.797 fused_ordering(528) 00:54:24.797 fused_ordering(529) 00:54:24.797 fused_ordering(530) 00:54:24.797 fused_ordering(531) 00:54:24.797 fused_ordering(532) 00:54:24.797 fused_ordering(533) 00:54:24.797 fused_ordering(534) 00:54:24.797 fused_ordering(535) 00:54:24.797 fused_ordering(536) 00:54:24.797 fused_ordering(537) 
00:54:24.797 fused_ordering(538) ... 00:54:25.626 fused_ordering(967) [repetitive fused_ordering counter output for entries 538 through 967 condensed]
00:54:25.626 fused_ordering(968) 00:54:25.626 fused_ordering(969) 00:54:25.626 fused_ordering(970) 00:54:25.626 fused_ordering(971) 00:54:25.626 fused_ordering(972) 00:54:25.626 fused_ordering(973) 00:54:25.626 fused_ordering(974) 00:54:25.626 fused_ordering(975) 00:54:25.626 fused_ordering(976) 00:54:25.626 fused_ordering(977) 00:54:25.626 fused_ordering(978) 00:54:25.626 fused_ordering(979) 00:54:25.626 fused_ordering(980) 00:54:25.626 fused_ordering(981) 00:54:25.626 fused_ordering(982) 00:54:25.626 fused_ordering(983) 00:54:25.626 fused_ordering(984) 00:54:25.626 fused_ordering(985) 00:54:25.626 fused_ordering(986) 00:54:25.626 fused_ordering(987) 00:54:25.626 fused_ordering(988) 00:54:25.626 fused_ordering(989) 00:54:25.626 fused_ordering(990) 00:54:25.626 fused_ordering(991) 00:54:25.626 fused_ordering(992) 00:54:25.626 fused_ordering(993) 00:54:25.626 fused_ordering(994) 00:54:25.626 fused_ordering(995) 00:54:25.626 fused_ordering(996) 00:54:25.626 fused_ordering(997) 00:54:25.626 fused_ordering(998) 00:54:25.626 fused_ordering(999) 00:54:25.626 fused_ordering(1000) 00:54:25.626 fused_ordering(1001) 00:54:25.626 fused_ordering(1002) 00:54:25.626 fused_ordering(1003) 00:54:25.626 fused_ordering(1004) 00:54:25.626 fused_ordering(1005) 00:54:25.626 fused_ordering(1006) 00:54:25.626 fused_ordering(1007) 00:54:25.626 fused_ordering(1008) 00:54:25.626 fused_ordering(1009) 00:54:25.626 fused_ordering(1010) 00:54:25.626 fused_ordering(1011) 00:54:25.626 fused_ordering(1012) 00:54:25.626 fused_ordering(1013) 00:54:25.626 fused_ordering(1014) 00:54:25.626 fused_ordering(1015) 00:54:25.626 fused_ordering(1016) 00:54:25.626 fused_ordering(1017) 00:54:25.626 fused_ordering(1018) 00:54:25.626 fused_ordering(1019) 00:54:25.626 fused_ordering(1020) 00:54:25.626 fused_ordering(1021) 00:54:25.626 fused_ordering(1022) 00:54:25.626 fused_ordering(1023) 00:54:25.626 10:49:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:54:25.626 10:49:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:54:25.626 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:54:25.626 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:54:25.905 rmmod nvme_tcp 00:54:25.905 rmmod nvme_fabrics 00:54:25.905 rmmod nvme_keyring 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 87990 ']' 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 87990 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 87990 ']' 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 87990 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:54:25.905 10:49:33 nvmf_tcp.nvmf_fused_ordering 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:54:25.906 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87990 00:54:25.906 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:54:25.906 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:54:25.906 killing process with pid 87990 00:54:25.906 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87990' 00:54:25.906 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 87990 00:54:25.906 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 87990 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:54:26.164 00:54:26.164 real 0m3.754s 00:54:26.164 user 0m4.196s 00:54:26.164 sys 0m1.367s 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:54:26.164 10:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:54:26.164 ************************************ 00:54:26.164 END TEST nvmf_fused_ordering 00:54:26.164 ************************************ 00:54:26.164 10:49:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:54:26.164 10:49:34 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:54:26.164 10:49:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:54:26.164 10:49:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:54:26.164 10:49:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:26.164 ************************************ 00:54:26.164 START TEST nvmf_delete_subsystem 00:54:26.164 ************************************ 00:54:26.164 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:54:26.423 * Looking for test storage... 
00:54:26.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:26.423 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:54:26.424 Cannot find device "nvmf_tgt_br" 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:54:26.424 Cannot find device "nvmf_tgt_br2" 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:54:26.424 Cannot find device "nvmf_tgt_br" 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:54:26.424 Cannot find device "nvmf_tgt_br2" 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:54:26.424 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:54:26.682 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:26.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:26.682 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:26.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:26.683 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:26.941 10:49:34 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:54:26.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:26.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:54:26.941 00:54:26.941 --- 10.0.0.2 ping statistics --- 00:54:26.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:26.941 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:54:26.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:26.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:54:26.941 00:54:26.941 --- 10.0.0.3 ping statistics --- 00:54:26.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:26.941 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:26.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:26.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:54:26.941 00:54:26.941 --- 10.0.0.1 ping statistics --- 00:54:26.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:26.941 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=88231 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 88231 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 88231 ']' 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:54:26.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:54:26.941 10:49:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:26.942 [2024-07-22 10:49:34.750348] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:54:26.942 [2024-07-22 10:49:34.750424] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:26.942 [2024-07-22 10:49:34.872310] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:54:27.200 [2024-07-22 10:49:34.881293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:54:27.201 [2024-07-22 10:49:34.925231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:27.201 [2024-07-22 10:49:34.925513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:27.201 [2024-07-22 10:49:34.925605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:27.201 [2024-07-22 10:49:34.925650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:27.201 [2024-07-22 10:49:34.925675] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:27.201 [2024-07-22 10:49:34.926346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:54:27.201 [2024-07-22 10:49:34.926345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:54:27.767 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:54:27.767 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:54:27.767 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:54:27.767 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:54:27.767 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:27.767 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:27.767 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:54:27.767 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:27.767 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:28.026 [2024-07-22 10:49:35.698399] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:28.026 [2024-07-22 10:49:35.722538] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:28.026 NULL1 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:28.026 Delay0 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=88282 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:54:28.026 10:49:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:54:28.026 [2024-07-22 10:49:35.928643] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
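For reference, the target-side setup traced above reduces to the RPC sequence below before spdk_nvme_perf is started against it. This is a sketch rather than the script itself: rpc_cmd in the trace wraps SPDK's rpc.py (assumed here to be on PATH and talking to the running nvmf_tgt's default RPC socket), and the 10.0.0.2:4420 listener matches the veth address configured earlier in this run.

# sketch: recreate the traced setup by hand (assumes nvmf_tgt is already running)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The Delay0 bdev (a delay wrapper around the null bdev) keeps requests outstanding long enough for the nvmf_delete_subsystem call that follows to race against in-flight I/O, which is what this test exercises.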
00:54:30.013 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:30.013 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:30.013 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:54:30.272 [repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completions from the in-flight spdk_nvme_perf workload condensed; the distinct errors follow]
00:54:30.273 [2024-07-22 10:49:37.963765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd8ec000c00 is same with the state(5) to be set
00:54:31.210 [2024-07-22 10:49:38.940884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabcc70 is same with the state(5) to be set
00:54:31.210 [2024-07-22 10:49:38.955676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd8ec00cff0 is same with the state(5) to be set
00:54:31.210 [2024-07-22 10:49:38.956062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad49f0 is same with the state(5) to be set
00:54:31.210 [2024-07-22 10:49:38.956803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd8ec00d770 is same with the state(5) to be set
00:54:31.211 [2024-07-22 10:49:38.957240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabce50 is same with the state(5) to be set
00:54:31.211 Initializing NVMe Controllers
00:54:31.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:54:31.211 Controller IO queue size 128, less than required.
00:54:31.211 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:54:31.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:54:31.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:54:31.211 Initialization complete. Launching workers.
00:54:31.211 ======================================================== 00:54:31.211 Latency(us) 00:54:31.211 Device Information : IOPS MiB/s Average min max 00:54:31.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.84 0.09 899909.80 491.78 1008297.33 00:54:31.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.46 0.08 999022.90 1078.23 2002780.21 00:54:31.211 ======================================================== 00:54:31.211 Total : 343.30 0.17 944791.20 491.78 2002780.21 00:54:31.211 00:54:31.211 [2024-07-22 10:49:38.958490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabcc70 (9): Bad file descriptor 00:54:31.211 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:54:31.211 10:49:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:31.211 10:49:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:54:31.211 10:49:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 88282 00:54:31.211 10:49:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 88282 00:54:31.778 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (88282) - No such process 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 88282 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 88282 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 88282 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:31.778 [2024-07-22 10:49:39.489510] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=88322 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88322 00:54:31.778 10:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:31.778 [2024-07-22 10:49:39.675717] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
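At this point the test has recreated the subsystem, its TCP listener and the Delay0 namespace, and has launched a second spdk_nvme_perf workload (pid 88322) against it; the delay / kill -0 / sleep lines that follow are the script polling for that workload to go away. A minimal bash sketch of that polling pattern, paraphrased from the trace rather than copied from delete_subsystem.sh (the perf arguments and the loop bound of 20 are the ones visible above):

# Start the fabrics workload in the background and remember its PID.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!          # 88322 in this run
delay=0
# kill -0 sends no signal; it only tests whether the PID still exists.
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break   # give up after roughly ten seconds of 0.5s polls
    sleep 0.5
done

The loop ends as soon as spdk_nvme_perf exits, whether it finished its -t 3 run or was killed off by the subsystem being deleted underneath it; after that, kill -0 keeps failing and the later wait / "NOT wait" checks on the dead PID fall through, which is exactly what the "No such process" lines in the trace show.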
00:54:32.346 10:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:32.346 10:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88322 00:54:32.346 10:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:32.604 10:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:32.604 10:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88322 00:54:32.604 10:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:33.171 10:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:33.171 10:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88322 00:54:33.171 10:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:33.737 10:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:33.737 10:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88322 00:54:33.737 10:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:34.303 10:49:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:34.303 10:49:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88322 00:54:34.303 10:49:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:34.870 10:49:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:34.870 10:49:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88322 00:54:34.870 10:49:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:34.870 Initializing NVMe Controllers 00:54:34.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:54:34.870 Controller IO queue size 128, less than required. 00:54:34.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:54:34.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:54:34.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:54:34.870 Initialization complete. Launching workers. 
00:54:34.870 ======================================================== 00:54:34.870 Latency(us) 00:54:34.870 Device Information : IOPS MiB/s Average min max 00:54:34.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002097.38 1000104.05 1005914.18 00:54:34.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003690.67 1000108.80 1011343.41 00:54:34.870 ======================================================== 00:54:34.870 Total : 256.00 0.12 1002894.03 1000104.05 1011343.41 00:54:34.870 00:54:35.128 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:35.128 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88322 00:54:35.128 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (88322) - No such process 00:54:35.128 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 88322 00:54:35.128 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:54:35.128 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:54:35.128 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:54:35.128 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:54:35.386 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:54:35.386 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:54:35.386 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:54:35.386 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:54:35.386 rmmod nvme_tcp 00:54:35.386 rmmod nvme_fabrics 00:54:35.387 rmmod nvme_keyring 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 88231 ']' 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 88231 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 88231 ']' 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 88231 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88231 00:54:35.387 killing process with pid 88231 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88231' 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 88231 00:54:35.387 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 88231 00:54:35.645 10:49:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:54:35.645 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:54:35.645 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:54:35.645 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:54:35.645 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:54:35.645 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:35.645 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:35.645 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:35.645 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:54:35.645 ************************************ 00:54:35.645 END TEST nvmf_delete_subsystem 00:54:35.645 ************************************ 00:54:35.645 00:54:35.645 real 0m9.435s 00:54:35.645 user 0m28.061s 00:54:35.645 sys 0m2.173s 00:54:35.645 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:54:35.645 10:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:35.645 10:49:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:54:35.645 10:49:43 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:54:35.645 10:49:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:54:35.645 10:49:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:54:35.645 10:49:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:35.645 ************************************ 00:54:35.645 START TEST nvmf_ns_masking 00:54:35.645 ************************************ 00:54:35.645 10:49:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:54:35.903 * Looking for test storage... 
00:54:35.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=48df46af-b082-4134-ac1c-b38e26b81171 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=36963663-09e4-4272-9a91-703909c5b9c4 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:54:35.903 
10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e00087d1-973a-4585-b2e6-3a82e4828811 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:54:35.903 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:54:35.904 Cannot find device "nvmf_tgt_br" 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:54:35.904 10:49:43 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:54:35.904 Cannot find device "nvmf_tgt_br2" 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:54:35.904 Cannot find device "nvmf_tgt_br" 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:54:35.904 Cannot find device "nvmf_tgt_br2" 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:54:35.904 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:54:36.161 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:36.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:36.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:54:36.162 10:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:36.162 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:36.162 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:36.162 10:49:44 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:54:36.162 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:54:36.162 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:54:36.162 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:36.162 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:36.162 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:36.162 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:36.162 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:54:36.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:36.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:54:36.426 00:54:36.426 --- 10.0.0.2 ping statistics --- 00:54:36.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:36.426 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:54:36.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:36.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:54:36.426 00:54:36.426 --- 10.0.0.3 ping statistics --- 00:54:36.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:36.426 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:36.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:54:36.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:54:36.426 00:54:36.426 --- 10.0.0.1 ping statistics --- 00:54:36.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:36.426 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:54:36.426 10:49:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=88566 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 88566 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 88566 ']' 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:36.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:54:36.427 10:49:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:54:36.427 [2024-07-22 10:49:44.212881] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:54:36.427 [2024-07-22 10:49:44.213155] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:36.427 [2024-07-22 10:49:44.331885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:54:36.427 [2024-07-22 10:49:44.357410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:36.686 [2024-07-22 10:49:44.399880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
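The ip and iptables commands above build the virtual network the rest of the test runs on: the SPDK target is started inside the nvmf_tgt_ns_spdk network namespace and listens on 10.0.0.2 (with 10.0.0.3 as a second target address), the initiator stays in the root namespace as 10.0.0.1, and the host-side veth peers are joined by the nvmf_br bridge. A condensed bash sketch of that setup, restricted to the commands visible in the trace (pre-cleanup of stale devices and error handling omitted):

# Target side lives in its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# One bridge in the root namespace joins the three host-side veth peers.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Let NVMe/TCP traffic (port 4420) in and let the bridge forward between its ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the log (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) confirm both directions across the bridge before nvmf_tgt is launched with ip netns exec nvmf_tgt_ns_spdk.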
00:54:36.686 [2024-07-22 10:49:44.399928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:36.686 [2024-07-22 10:49:44.399937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:36.686 [2024-07-22 10:49:44.399945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:36.686 [2024-07-22 10:49:44.399951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:36.686 [2024-07-22 10:49:44.399979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:54:37.252 10:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:54:37.252 10:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:54:37.252 10:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:54:37.252 10:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:54:37.252 10:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:54:37.252 10:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:37.252 10:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:54:37.510 [2024-07-22 10:49:45.283189] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:37.510 10:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:54:37.510 10:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:54:37.510 10:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:54:37.768 Malloc1 00:54:37.768 10:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:54:37.768 Malloc2 00:54:38.026 10:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:54:38.026 10:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:54:38.284 10:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:38.543 [2024-07-22 10:49:46.263026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:38.543 10:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:54:38.543 10:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e00087d1-973a-4585-b2e6-3a82e4828811 -a 10.0.0.2 -s 4420 -i 4 00:54:38.543 10:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:54:38.543 10:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:54:38.543 10:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:54:38.543 10:49:46 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:54:38.543 10:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:54:41.076 [ 0]:0x1 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd790f002d4f44e798f9f50a56a604cb 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd790f002d4f44e798f9f50a56a604cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:54:41.076 [ 0]:0x1 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd790f002d4f44e798f9f50a56a604cb 00:54:41.076 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd790f002d4f44e798f9f50a56a604cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:41.077 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:54:41.077 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:41.077 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:54:41.077 [ 1]:0x2 00:54:41.077 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:54:41.077 10:49:48 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:54:41.077 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=944cab71e5f34abb98675581744fc8e9 00:54:41.077 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 944cab71e5f34abb98675581744fc8e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:41.077 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:54:41.077 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:54:41.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:54:41.077 10:49:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:41.335 10:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:54:41.593 10:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:54:41.593 10:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e00087d1-973a-4585-b2e6-3a82e4828811 -a 10.0.0.2 -s 4420 -i 4 00:54:41.593 10:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:54:41.593 10:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:54:41.593 10:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:54:41.593 10:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:54:41.593 10:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:54:41.593 10:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:54:43.505 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:54:43.505 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:54:43.505 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:43.763 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:54:43.764 [ 0]:0x2 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=944cab71e5f34abb98675581744fc8e9 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 944cab71e5f34abb98675581744fc8e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:43.764 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:54:44.023 [ 0]:0x1 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd790f002d4f44e798f9f50a56a604cb 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd790f002d4f44e798f9f50a56a604cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:44.023 10:49:51 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:54:44.023 [ 1]:0x2 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=944cab71e5f34abb98675581744fc8e9 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 944cab71e5f34abb98675581744fc8e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:44.023 10:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:54:44.281 [ 0]:0x2 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=944cab71e5f34abb98675581744fc8e9 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 944cab71e5f34abb98675581744fc8e9 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:54:44.281 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:54:44.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:54:44.539 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:54:44.539 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:54:44.539 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e00087d1-973a-4585-b2e6-3a82e4828811 -a 10.0.0.2 -s 4420 -i 4 00:54:44.797 10:49:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:54:44.797 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:54:44.797 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:54:44.797 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:54:44.797 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:54:44.797 10:49:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:54:46.697 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:54:46.697 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:54:46.697 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:54:46.697 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:54:46.697 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:54:46.697 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:54:46.697 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:54:46.697 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:54:46.957 [ 0]:0x1 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd790f002d4f44e798f9f50a56a604cb 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd790f002d4f44e798f9f50a56a604cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:54:46.957 
10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:54:46.957 [ 1]:0x2 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=944cab71e5f34abb98675581744fc8e9 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 944cab71e5f34abb98675581744fc8e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:46.957 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:54:47.216 [ 0]:0x2 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:54:47.216 10:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=944cab71e5f34abb98675581744fc8e9 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 944cab71e5f34abb98675581744fc8e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:54:47.216 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:54:47.476 [2024-07-22 10:49:55.194400] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:54:47.476 2024/07/22 10:49:55 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:54:47.476 request: 00:54:47.476 { 00:54:47.476 "method": "nvmf_ns_remove_host", 00:54:47.476 "params": { 00:54:47.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:47.476 "nsid": 2, 00:54:47.476 "host": "nqn.2016-06.io.spdk:host1" 00:54:47.476 } 00:54:47.476 } 00:54:47.476 Got JSON-RPC error response 00:54:47.476 GoRPCClient: error on JSON-RPC call 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:54:47.476 [ 0]:0x2 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=944cab71e5f34abb98675581744fc8e9 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 944cab71e5f34abb98675581744fc8e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:54:47.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=88935 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 88935 /var/tmp/host.sock 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 88935 ']' 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:54:47.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:54:47.476 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:54:47.783 [2024-07-22 10:49:55.427169] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:54:47.783 [2024-07-22 10:49:55.427668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88935 ] 00:54:47.783 [2024-07-22 10:49:55.545959] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:54:47.783 [2024-07-22 10:49:55.571084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:47.783 [2024-07-22 10:49:55.618328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:54:48.379 10:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:54:48.379 10:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:54:48.379 10:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:48.638 10:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:54:48.897 10:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 48df46af-b082-4134-ac1c-b38e26b81171 00:54:48.897 10:49:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:54:48.897 10:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 48DF46AFB0824134AC1CB38E26B81171 -i 00:54:49.155 10:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 36963663-09e4-4272-9a91-703909c5b9c4 00:54:49.155 10:49:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:54:49.155 10:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3696366309E442729A91703909C5B9C4 -i 00:54:49.155 10:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:54:49.414 10:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:54:49.673 10:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:54:49.673 10:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:54:49.932 nvme0n1 00:54:49.932 10:49:57 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:54:49.932 10:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:54:50.191 nvme1n2 00:54:50.191 10:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:54:50.191 10:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:54:50.191 10:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:54:50.191 10:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:54:50.191 10:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:54:50.451 10:49:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:54:50.451 10:49:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:54:50.451 10:49:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:54:50.451 10:49:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:54:50.451 10:49:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 48df46af-b082-4134-ac1c-b38e26b81171 == \4\8\d\f\4\6\a\f\-\b\0\8\2\-\4\1\3\4\-\a\c\1\c\-\b\3\8\e\2\6\b\8\1\1\7\1 ]] 00:54:50.451 10:49:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:54:50.451 10:49:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:54:50.451 10:49:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 36963663-09e4-4272-9a91-703909c5b9c4 == \3\6\9\6\3\6\6\3\-\0\9\e\4\-\4\2\7\2\-\9\a\9\1\-\7\0\3\9\0\9\c\5\b\9\c\4 ]] 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 88935 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 88935 ']' 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 88935 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88935 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:54:50.710 killing process with pid 88935 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88935' 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 88935 00:54:50.710 10:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 88935 00:54:50.969 10:49:58 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:51.228 10:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:54:51.228 10:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:54:51.228 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:54:51.228 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:54:51.487 rmmod nvme_tcp 00:54:51.487 rmmod nvme_fabrics 00:54:51.487 rmmod nvme_keyring 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 88566 ']' 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 88566 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 88566 ']' 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 88566 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88566 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:54:51.487 killing process with pid 88566 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88566' 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 88566 00:54:51.487 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 88566 00:54:51.746 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:54:51.746 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:54:51.746 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:54:51.746 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:54:51.746 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:54:51.746 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:51.746 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:51.746 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:51.746 10:49:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:54:51.746 00:54:51.746 real 0m15.998s 00:54:51.746 user 0m23.061s 00:54:51.746 sys 0m3.367s 00:54:51.746 10:49:59 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:54:51.746 10:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:54:51.746 ************************************ 00:54:51.746 END TEST nvmf_ns_masking 00:54:51.746 ************************************ 00:54:51.746 10:49:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:54:51.746 10:49:59 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:54:51.746 10:49:59 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:54:51.746 10:49:59 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:54:51.746 10:49:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:54:51.746 10:49:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:54:51.746 10:49:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:51.746 ************************************ 00:54:51.746 START TEST nvmf_host_management 00:54:51.746 ************************************ 00:54:51.746 10:49:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:54:52.005 * Looking for test storage... 00:54:52.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:54:52.005 10:49:59 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:54:52.005 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:54:52.006 Cannot find device "nvmf_tgt_br" 
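
The "Cannot find device" and "Cannot open network namespace" messages around here are only nvmf_veth_init tearing down leftovers before it rebuilds the test network; the commands traced next create a dedicated namespace for the target and bridge it back to the initiator side. Condensed into one listing, with the interface names and addresses exactly as they appear in the trace and error handling omitted, the topology being built is:

# Target side lives in its own network namespace; host side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target path
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target path
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# One bridge ties the three host-side veth ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Open the NVMe/TCP port and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity check in both directions before the target is started.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The three pings that follow in the trace are exactly this check: initiator to both target addresses, then target namespace back to the initiator.
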
00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:54:52.006 Cannot find device "nvmf_tgt_br2" 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:54:52.006 Cannot find device "nvmf_tgt_br" 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:54:52.006 Cannot find device "nvmf_tgt_br2" 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:54:52.006 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:52.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:52.265 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:54:52.265 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:52.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:52.265 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:54:52.265 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:54:52.265 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:52.265 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:52.265 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:52.265 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:52.265 10:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:52.265 10:50:00 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:54:52.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:52.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:54:52.265 00:54:52.265 --- 10.0.0.2 ping statistics --- 00:54:52.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:52.265 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:54:52.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:52.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:54:52.265 00:54:52.265 --- 10.0.0.3 ping statistics --- 00:54:52.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:52.265 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:52.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:54:52.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:54:52.265 00:54:52.265 --- 10.0.0.1 ping statistics --- 00:54:52.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:52.265 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=89295 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 89295 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 89295 ']' 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:52.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:54:52.265 10:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:52.525 [2024-07-22 10:50:00.236859] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
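
Once the pings pass, nvmfappstart launches the target inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, reactors on cores 1-4) and the test configures it over /var/tmp/spdk.sock. The rpcs.txt batch that host_management.sh feeds to rpc_cmd is not expanded in this trace, so the listing below is only an illustrative, roughly equivalent bring-up assembled from the values that do appear nearby (the -t tcp -o -u 8192 transport call, the 64 MiB / 512 B Malloc0 bdev, serial SPDKISFASTANDAWESOME, subsystem nqn.2016-06.io.spdk:cnode0, listener 10.0.0.2:4420); the -a/-s flags in particular are illustrative choices, not a dump of the script.

# Hypothetical manual equivalent of the traced target configuration.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py            # talks to /var/tmp/spdk.sock by default
$RPC nvmf_create_transport -t tcp -o -u 8192               # same call as the rpc_cmd seen below
$RPC bdev_malloc_create 64 512 -b Malloc0                  # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
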
00:54:52.525 [2024-07-22 10:50:00.236934] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:52.525 [2024-07-22 10:50:00.356012] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:54:52.525 [2024-07-22 10:50:00.381567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:54:52.525 [2024-07-22 10:50:00.426701] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:52.525 [2024-07-22 10:50:00.426754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:52.525 [2024-07-22 10:50:00.426763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:52.525 [2024-07-22 10:50:00.426771] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:52.525 [2024-07-22 10:50:00.426778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:52.525 [2024-07-22 10:50:00.426967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:54:52.525 [2024-07-22 10:50:00.427165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:54:52.525 [2024-07-22 10:50:00.427860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:54:52.525 [2024-07-22 10:50:00.427862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:53.464 [2024-07-22 10:50:01.133897] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # 
rpc_cmd 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:53.464 Malloc0 00:54:53.464 [2024-07-22 10:50:01.208736] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=89366 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 89366 /var/tmp/bdevperf.sock 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 89366 ']' 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:54:53.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:54:53.464 10:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:54:53.465 10:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:53.465 10:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:53.465 { 00:54:53.465 "params": { 00:54:53.465 "name": "Nvme$subsystem", 00:54:53.465 "trtype": "$TEST_TRANSPORT", 00:54:53.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:53.465 "adrfam": "ipv4", 00:54:53.465 "trsvcid": "$NVMF_PORT", 00:54:53.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:53.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:53.465 "hdgst": ${hdgst:-false}, 00:54:53.465 "ddgst": ${ddgst:-false} 00:54:53.465 }, 00:54:53.465 "method": "bdev_nvme_attach_controller" 00:54:53.465 } 00:54:53.465 EOF 00:54:53.465 )") 00:54:53.465 10:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:54:53.465 10:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
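
gen_nvmf_target_json, traced above, stamps one bdev_nvme_attach_controller entry per target out of a heredoc template (hdgst/ddgst default to false), runs it through jq, and the resolved JSON printed just below is handed to bdevperf as --json /dev/fd/63, which is what a bash process substitution looks like on the command line. A minimal sketch of that pattern follows; the helper name is illustrative, and the real common.sh function additionally wraps these per-controller entries in the full bdev-subsystem config before printing.

# Sketch of the config-generation pattern (illustrative helper, not the real gen_nvmf_target_json).
gen_target_json() {
    local n=$1
    # Heredoc template; unset hdgst/ddgst fall back to false, jq validates and pretty-prints.
    cat <<EOF | jq .
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# bdevperf consumes the generated config through process substitution, which is why the
# traced command line reads --json /dev/fd/63:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json 0) -q 64 -o 65536 -w verify -t 10
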
00:54:53.465 10:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:54:53.465 10:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:54:53.465 "params": { 00:54:53.465 "name": "Nvme0", 00:54:53.465 "trtype": "tcp", 00:54:53.465 "traddr": "10.0.0.2", 00:54:53.465 "adrfam": "ipv4", 00:54:53.465 "trsvcid": "4420", 00:54:53.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:53.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:53.465 "hdgst": false, 00:54:53.465 "ddgst": false 00:54:53.465 }, 00:54:53.465 "method": "bdev_nvme_attach_controller" 00:54:53.465 }' 00:54:53.465 [2024-07-22 10:50:01.328370] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:54:53.465 [2024-07-22 10:50:01.328585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89366 ] 00:54:53.724 [2024-07-22 10:50:01.446738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:54:53.724 [2024-07-22 10:50:01.470161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:53.724 [2024-07-22 10:50:01.514004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:54:53.982 Running I/O for 10 seconds... 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1194 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1194 -ge 100 ']' 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:54:54.551 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:54:54.552 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:54:54.552 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:54.552 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:54.552 [2024-07-22 10:50:02.252562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2b00 is same with the state(5) to be set 00:54:54.552 [2024-07-22 10:50:02.252959] nvme_qpair.c: 
243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 queued WRITE commands on sqid:1 (cid:0..63, lba:32768..40832 in steps of 128, len:128 each, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-07-22 10:50:02.252986 .. 10:50:02.254178; per-command notice pairs condensed]
00:54:54.553 [2024-07-22 10:50:02.254205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:54:54.553 [2024-07-22 10:50:02.254260] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24780e0 was disconnected and freed. reset controller.
00:54:54.553 [2024-07-22 10:50:02.255131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:54:54.553 task offset: 32768 on job bdev=Nvme0n1 fails
00:54:54.553
00:54:54.553                                                                         Latency(us)
00:54:54.553 Device Information                              : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:54:54.553 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:54:54.553 Job: Nvme0n1 ended in about 0.59 seconds with error
00:54:54.553          Verification LBA range: start 0x0 length 0x400
00:54:54.553          Nvme0n1                                :       0.59    2159.07     134.94     107.95       0.00   27639.47    2368.77   25582.73
00:54:54.553 ===================================================================================================================
00:54:54.553 Total                                           :               2159.07     134.94     107.95       0.00   27639.47    2368.77   25582.73
00:54:54.553 [2024-07-22 10:50:02.256827] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:54:54.553 [2024-07-22 10:50:02.256849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc2850 (9): Bad file descriptor
00:54:54.553 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:54:54.553 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:54:54.553 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:54:54.553 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:54:54.553 [2024-07-22 10:50:02.268857] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
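For readers following the trace: after the SQ-deletion aborts, host_management.sh re-registers the initiator host on the subsystem so the bdev_nvme reset path can reconnect. A minimal stand-alone equivalent of that rpc_cmd call is sketched below; it assumes the target's default /var/tmp/spdk.sock RPC socket, since the trace does not show an explicit socket argument.

# Re-allow host0 on cnode0 so the next reconnect attempt is accepted
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0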
00:54:54.553 10:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:54.553 10:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 89366 00:54:55.492 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (89366) - No such process 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:55.492 { 00:54:55.492 "params": { 00:54:55.492 "name": "Nvme$subsystem", 00:54:55.492 "trtype": "$TEST_TRANSPORT", 00:54:55.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:55.492 "adrfam": "ipv4", 00:54:55.492 "trsvcid": "$NVMF_PORT", 00:54:55.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:55.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:55.492 "hdgst": ${hdgst:-false}, 00:54:55.492 "ddgst": ${ddgst:-false} 00:54:55.492 }, 00:54:55.492 "method": "bdev_nvme_attach_controller" 00:54:55.492 } 00:54:55.492 EOF 00:54:55.492 )") 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:54:55.492 10:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:54:55.492 "params": { 00:54:55.492 "name": "Nvme0", 00:54:55.492 "trtype": "tcp", 00:54:55.492 "traddr": "10.0.0.2", 00:54:55.492 "adrfam": "ipv4", 00:54:55.492 "trsvcid": "4420", 00:54:55.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:55.492 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:55.492 "hdgst": false, 00:54:55.492 "ddgst": false 00:54:55.492 }, 00:54:55.492 "method": "bdev_nvme_attach_controller" 00:54:55.492 }' 00:54:55.492 [2024-07-22 10:50:03.331994] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:54:55.492 [2024-07-22 10:50:03.332065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89416 ] 00:54:55.750 [2024-07-22 10:50:03.450253] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
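The bdevperf retry above consumes the gen_nvmf_target_json output through /dev/fd/62. As a rough sketch, the same attach can be expressed as a stand-alone config file: the params block below is copied from the substituted JSON printed in the trace, while the outer "subsystems"/"bdev" wrapper is an assumption based on the usual SPDK JSON config layout (gen_nvmf_target_json may emit additional entries), and /tmp/bdevperf_nvme0.json is just a hypothetical file name.

# Hypothetical stand-alone config equivalent to the /dev/fd/62 payload
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same bdevperf invocation as in the trace, pointed at the file instead of /dev/fd/62
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1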
00:54:55.750 [2024-07-22 10:50:03.473984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:55.750 [2024-07-22 10:50:03.518073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:54:55.750 Running I/O for 1 seconds... 00:54:57.128 00:54:57.128 Latency(us) 00:54:57.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:57.128 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:54:57.128 Verification LBA range: start 0x0 length 0x400 00:54:57.128 Nvme0n1 : 1.03 2246.83 140.43 0.00 0.00 28028.30 3632.12 25688.01 00:54:57.128 =================================================================================================================== 00:54:57.128 Total : 2246.83 140.43 0.00 0.00 28028.30 3632.12 25688.01 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:54:57.128 10:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:54:57.128 rmmod nvme_tcp 00:54:57.128 rmmod nvme_fabrics 00:54:57.128 rmmod nvme_keyring 00:54:57.128 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:54:57.128 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:54:57.128 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:54:57.128 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 89295 ']' 00:54:57.128 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 89295 00:54:57.128 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 89295 ']' 00:54:57.128 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 89295 00:54:57.129 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:54:57.129 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:54:57.129 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89295 00:54:57.129 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:54:57.129 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:54:57.129 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89295' 00:54:57.129 killing process with pid 89295 00:54:57.129 10:50:05 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@967 -- # kill 89295 00:54:57.129 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 89295 00:54:57.387 [2024-07-22 10:50:05.221886] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:54:57.387 00:54:57.387 real 0m5.706s 00:54:57.387 user 0m21.509s 00:54:57.387 sys 0m1.570s 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:54:57.387 ************************************ 00:54:57.387 END TEST nvmf_host_management 00:54:57.387 10:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:57.387 ************************************ 00:54:57.646 10:50:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:54:57.646 10:50:05 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:54:57.646 10:50:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:54:57.646 10:50:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:54:57.646 10:50:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:57.646 ************************************ 00:54:57.646 START TEST nvmf_lvol 00:54:57.646 ************************************ 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:54:57.646 * Looking for test storage... 
00:54:57.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:54:57.646 10:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:54:57.647 10:50:05 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:54:57.647 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:54:57.906 Cannot find device "nvmf_tgt_br" 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:54:57.906 Cannot find device "nvmf_tgt_br2" 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:54:57.906 Cannot find device "nvmf_tgt_br" 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:54:57.906 Cannot find device "nvmf_tgt_br2" 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:57.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:57.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:54:57.906 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:58.164 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:54:58.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:58.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:54:58.164 00:54:58.164 --- 10.0.0.2 ping statistics --- 00:54:58.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:58.164 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:54:58.165 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:58.165 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:54:58.165 00:54:58.165 --- 10.0.0.3 ping statistics --- 00:54:58.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:58.165 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:58.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:54:58.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:54:58.165 00:54:58.165 --- 10.0.0.1 ping statistics --- 00:54:58.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:58.165 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:54:58.165 10:50:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:54:58.165 10:50:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=89621 00:54:58.165 10:50:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:54:58.165 10:50:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 89621 00:54:58.165 10:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 89621 ']' 00:54:58.165 10:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:58.165 10:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:54:58.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:58.165 10:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:58.165 10:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:54:58.165 10:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:54:58.165 [2024-07-22 10:50:06.058867] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:54:58.165 [2024-07-22 10:50:06.058940] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:58.423 [2024-07-22 10:50:06.177948] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:54:58.423 [2024-07-22 10:50:06.199962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:54:58.423 [2024-07-22 10:50:06.243890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:58.423 [2024-07-22 10:50:06.243938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:54:58.423 [2024-07-22 10:50:06.243948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:58.423 [2024-07-22 10:50:06.243956] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:58.423 [2024-07-22 10:50:06.243963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:58.423 [2024-07-22 10:50:06.244187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:54:58.423 [2024-07-22 10:50:06.244364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:54:58.423 [2024-07-22 10:50:06.244366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:54:58.989 10:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:54:58.989 10:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:54:58.989 10:50:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:54:58.989 10:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:54:58.989 10:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:54:59.247 10:50:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:59.247 10:50:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:54:59.247 [2024-07-22 10:50:07.129407] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:59.247 10:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:54:59.504 10:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:54:59.504 10:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:54:59.762 10:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:54:59.762 10:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:55:00.019 10:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:55:00.277 10:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=488014c2-8621-44df-ab4a-2bc39bbe7e9a 00:55:00.277 10:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 488014c2-8621-44df-ab4a-2bc39bbe7e9a lvol 20 00:55:00.277 10:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7a981a88-6b37-40ee-ab11-6401dff7fb21 00:55:00.277 10:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:55:00.535 10:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7a981a88-6b37-40ee-ab11-6401dff7fb21 00:55:00.793 10:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:55:01.052 [2024-07-22 10:50:08.727647] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:01.052 10:50:08 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:55:01.052 10:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=89763 00:55:01.052 10:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:55:01.052 10:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:55:02.429 10:50:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 7a981a88-6b37-40ee-ab11-6401dff7fb21 MY_SNAPSHOT 00:55:02.429 10:50:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5d8b879d-af52-4563-b6a6-a4b830148fa4 00:55:02.429 10:50:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 7a981a88-6b37-40ee-ab11-6401dff7fb21 30 00:55:02.687 10:50:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5d8b879d-af52-4563-b6a6-a4b830148fa4 MY_CLONE 00:55:02.945 10:50:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b87a6943-9e07-4256-8e78-1624823e9d9f 00:55:02.945 10:50:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b87a6943-9e07-4256-8e78-1624823e9d9f 00:55:03.512 10:50:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 89763 00:55:11.624 Initializing NVMe Controllers 00:55:11.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:55:11.624 Controller IO queue size 128, less than required. 00:55:11.624 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:55:11.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:55:11.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:55:11.624 Initialization complete. Launching workers. 
00:55:11.624 ======================================================== 00:55:11.624 Latency(us) 00:55:11.624 Device Information : IOPS MiB/s Average min max 00:55:11.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12601.90 49.23 10162.09 1903.43 53488.69 00:55:11.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12631.40 49.34 10134.83 3216.34 56882.72 00:55:11.624 ======================================================== 00:55:11.624 Total : 25233.30 98.57 10148.45 1903.43 56882.72 00:55:11.624 00:55:11.624 10:50:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:55:11.624 10:50:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7a981a88-6b37-40ee-ab11-6401dff7fb21 00:55:11.881 10:50:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 488014c2-8621-44df-ab4a-2bc39bbe7e9a 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:55:12.138 rmmod nvme_tcp 00:55:12.138 rmmod nvme_fabrics 00:55:12.138 rmmod nvme_keyring 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 89621 ']' 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 89621 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 89621 ']' 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 89621 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:55:12.138 10:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89621 00:55:12.138 killing process with pid 89621 00:55:12.138 10:50:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:55:12.138 10:50:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:55:12.138 10:50:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89621' 00:55:12.138 10:50:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 89621 00:55:12.138 10:50:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 89621 00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
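Condensing the xtrace above, the lvol path of this test drives the following RPC sequence against the target. This is a sketch only: the shell variables stand for the lvstore/lvol/snapshot/clone IDs printed earlier in this run (488014c2-..., 7a981a88-..., 5d8b879d-..., b87a6943-...), which will differ on any other run.

# Backing devices and an lvstore on top of a RAID0 of two malloc bdevs
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                        # -> Malloc0
$rpc bdev_malloc_create 64 512                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB lvol

# Export the lvol over NVMe/TCP
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Snapshot/resize/clone/inflate while spdk_nvme_perf keeps writing to it
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

# Teardown
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"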
00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:55:12.397 00:55:12.397 real 0m14.906s 00:55:12.397 user 1m0.935s 00:55:12.397 sys 0m5.003s 00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:55:12.397 10:50:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:55:12.397 ************************************ 00:55:12.397 END TEST nvmf_lvol 00:55:12.397 ************************************ 00:55:12.659 10:50:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:55:12.659 10:50:20 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:55:12.659 10:50:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:55:12.659 10:50:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:55:12.659 10:50:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:55:12.659 ************************************ 00:55:12.659 START TEST nvmf_lvs_grow 00:55:12.659 ************************************ 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:55:12.659 * Looking for test storage... 
00:55:12.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:55:12.659 Cannot find device "nvmf_tgt_br" 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:55:12.659 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:55:12.659 Cannot find device "nvmf_tgt_br2" 00:55:12.660 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:55:12.660 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:55:13.007 Cannot find device "nvmf_tgt_br" 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:55:13.007 Cannot find device "nvmf_tgt_br2" 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:13.007 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:13.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:55:13.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:55:13.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:55:13.007 00:55:13.007 --- 10.0.0.2 ping statistics --- 00:55:13.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:13.007 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:55:13.007 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:13.007 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:55:13.007 00:55:13.007 --- 10.0.0.3 ping statistics --- 00:55:13.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:13.007 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:13.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:13.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:55:13.007 00:55:13.007 --- 10.0.0.1 ping statistics --- 00:55:13.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:13.007 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:55:13.007 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=90131 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 90131 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 90131 ']' 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:55:13.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:55:13.267 10:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:13.267 [2024-07-22 10:50:20.999471] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:55:13.267 [2024-07-22 10:50:20.999540] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:13.267 [2024-07-22 10:50:21.117733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:55:13.267 [2024-07-22 10:50:21.140251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:13.267 [2024-07-22 10:50:21.182548] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:13.267 [2024-07-22 10:50:21.182596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:13.267 [2024-07-22 10:50:21.182606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:13.267 [2024-07-22 10:50:21.182614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:13.267 [2024-07-22 10:50:21.182620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:13.267 [2024-07-22 10:50:21.182644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:55:14.204 10:50:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:55:14.204 10:50:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:55:14.204 10:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:55:14.204 10:50:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:55:14.204 10:50:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:14.204 10:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:14.204 10:50:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:55:14.204 [2024-07-22 10:50:22.069803] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:14.204 ************************************ 00:55:14.204 START TEST lvs_grow_clean 00:55:14.204 ************************************ 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:55:14.204 10:50:22 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:55:14.204 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:55:14.463 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:55:14.463 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:55:14.722 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:14.722 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:14.722 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:55:14.980 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:55:14.980 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:55:14.980 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 lvol 150 00:55:15.252 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1c3b56b6-f9a5-4f23-aa3a-2d8d0a671c07 00:55:15.252 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:55:15.252 10:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:55:15.252 [2024-07-22 10:50:23.111492] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:55:15.252 [2024-07-22 10:50:23.111550] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:55:15.252 true 00:55:15.252 10:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:15.252 10:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:55:15.526 10:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:55:15.526 10:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:55:15.784 10:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1c3b56b6-f9a5-4f23-aa3a-2d8d0a671c07 00:55:15.784 10:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:55:16.043 [2024-07-22 10:50:23.882688] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:16.043 10:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:55:16.302 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:55:16.302 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=90282 00:55:16.302 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:55:16.302 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 90282 /var/tmp/bdevperf.sock 00:55:16.302 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 90282 ']' 00:55:16.302 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:55:16.302 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:55:16.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:55:16.302 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:55:16.302 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:55:16.302 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:55:16.302 [2024-07-22 10:50:24.117643] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:55:16.302 [2024-07-22 10:50:24.117713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90282 ] 00:55:16.561 [2024-07-22 10:50:24.235368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:55:16.561 [2024-07-22 10:50:24.260170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:16.561 [2024-07-22 10:50:24.306176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:55:17.129 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:55:17.129 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:55:17.129 10:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:55:17.387 Nvme0n1 00:55:17.387 10:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:55:17.646 [ 00:55:17.646 { 00:55:17.646 "aliases": [ 00:55:17.646 "1c3b56b6-f9a5-4f23-aa3a-2d8d0a671c07" 00:55:17.646 ], 00:55:17.646 "assigned_rate_limits": { 00:55:17.646 "r_mbytes_per_sec": 0, 00:55:17.646 "rw_ios_per_sec": 0, 00:55:17.646 "rw_mbytes_per_sec": 0, 00:55:17.646 "w_mbytes_per_sec": 0 00:55:17.646 }, 00:55:17.646 "block_size": 4096, 00:55:17.646 "claimed": false, 00:55:17.646 "driver_specific": { 00:55:17.646 "mp_policy": "active_passive", 00:55:17.646 "nvme": [ 00:55:17.646 { 00:55:17.646 "ctrlr_data": { 00:55:17.646 "ana_reporting": false, 00:55:17.646 "cntlid": 1, 00:55:17.646 "firmware_revision": "24.09", 00:55:17.646 "model_number": "SPDK bdev Controller", 00:55:17.646 "multi_ctrlr": true, 00:55:17.646 "oacs": { 00:55:17.646 "firmware": 0, 00:55:17.646 "format": 0, 00:55:17.646 "ns_manage": 0, 00:55:17.646 "security": 0 00:55:17.646 }, 00:55:17.646 "serial_number": "SPDK0", 00:55:17.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:17.646 "vendor_id": "0x8086" 00:55:17.646 }, 00:55:17.646 "ns_data": { 00:55:17.646 "can_share": true, 00:55:17.646 "id": 1 00:55:17.646 }, 00:55:17.646 "trid": { 00:55:17.646 "adrfam": "IPv4", 00:55:17.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:17.646 "traddr": "10.0.0.2", 00:55:17.646 "trsvcid": "4420", 00:55:17.646 "trtype": "TCP" 00:55:17.646 }, 00:55:17.646 "vs": { 00:55:17.646 "nvme_version": "1.3" 00:55:17.646 } 00:55:17.646 } 00:55:17.646 ] 00:55:17.646 }, 00:55:17.646 "memory_domains": [ 00:55:17.646 { 00:55:17.646 "dma_device_id": "system", 00:55:17.646 "dma_device_type": 1 00:55:17.646 } 00:55:17.646 ], 00:55:17.646 "name": "Nvme0n1", 00:55:17.646 "num_blocks": 38912, 00:55:17.646 "product_name": "NVMe disk", 00:55:17.646 "supported_io_types": { 00:55:17.646 "abort": true, 00:55:17.646 "compare": true, 00:55:17.646 "compare_and_write": true, 00:55:17.646 "copy": true, 00:55:17.646 "flush": true, 00:55:17.646 "get_zone_info": false, 00:55:17.646 "nvme_admin": true, 00:55:17.646 "nvme_io": true, 00:55:17.646 "nvme_io_md": false, 00:55:17.646 "nvme_iov_md": false, 00:55:17.646 "read": true, 00:55:17.646 "reset": true, 00:55:17.646 "seek_data": false, 00:55:17.646 "seek_hole": false, 00:55:17.646 "unmap": true, 00:55:17.646 "write": true, 00:55:17.646 "write_zeroes": true, 00:55:17.646 "zcopy": false, 00:55:17.646 "zone_append": false, 00:55:17.646 "zone_management": false 00:55:17.646 }, 00:55:17.646 "uuid": "1c3b56b6-f9a5-4f23-aa3a-2d8d0a671c07", 00:55:17.646 "zoned": false 00:55:17.646 } 00:55:17.646 ] 00:55:17.646 10:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:55:17.646 10:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=90329 00:55:17.646 10:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:55:17.646 Running I/O for 10 seconds... 00:55:19.021 Latency(us) 00:55:19.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:19.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:19.021 Nvme0n1 : 1.00 11835.00 46.23 0.00 0.00 0.00 0.00 0.00 00:55:19.021 =================================================================================================================== 00:55:19.021 Total : 11835.00 46.23 0.00 0.00 0.00 0.00 0.00 00:55:19.021 00:55:19.588 10:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:19.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:19.588 Nvme0n1 : 2.00 12067.50 47.14 0.00 0.00 0.00 0.00 0.00 00:55:19.588 =================================================================================================================== 00:55:19.588 Total : 12067.50 47.14 0.00 0.00 0.00 0.00 0.00 00:55:19.588 00:55:19.845 true 00:55:19.845 10:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:19.846 10:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:55:20.103 10:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:55:20.103 10:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:55:20.103 10:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 90329 00:55:20.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:20.669 Nvme0n1 : 3.00 12050.00 47.07 0.00 0.00 0.00 0.00 0.00 00:55:20.670 =================================================================================================================== 00:55:20.670 Total : 12050.00 47.07 0.00 0.00 0.00 0.00 0.00 00:55:20.670 00:55:21.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:21.604 Nvme0n1 : 4.00 11975.00 46.78 0.00 0.00 0.00 0.00 0.00 00:55:21.604 =================================================================================================================== 00:55:21.604 Total : 11975.00 46.78 0.00 0.00 0.00 0.00 0.00 00:55:21.604 00:55:22.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:22.979 Nvme0n1 : 5.00 11931.60 46.61 0.00 0.00 0.00 0.00 0.00 00:55:22.979 =================================================================================================================== 00:55:22.979 Total : 11931.60 46.61 0.00 0.00 0.00 0.00 0.00 00:55:22.979 00:55:23.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:23.915 Nvme0n1 : 6.00 11891.17 46.45 0.00 0.00 0.00 0.00 0.00 00:55:23.915 =================================================================================================================== 00:55:23.915 Total : 11891.17 46.45 0.00 0.00 0.00 0.00 0.00 00:55:23.915 00:55:24.892 Job: Nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:24.892 Nvme0n1 : 7.00 11850.43 46.29 0.00 0.00 0.00 0.00 0.00 00:55:24.892 =================================================================================================================== 00:55:24.892 Total : 11850.43 46.29 0.00 0.00 0.00 0.00 0.00 00:55:24.892 00:55:25.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:25.827 Nvme0n1 : 8.00 11834.62 46.23 0.00 0.00 0.00 0.00 0.00 00:55:25.827 =================================================================================================================== 00:55:25.827 Total : 11834.62 46.23 0.00 0.00 0.00 0.00 0.00 00:55:25.827 00:55:26.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:26.764 Nvme0n1 : 9.00 11810.44 46.13 0.00 0.00 0.00 0.00 0.00 00:55:26.764 =================================================================================================================== 00:55:26.764 Total : 11810.44 46.13 0.00 0.00 0.00 0.00 0.00 00:55:26.764 00:55:27.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:27.719 Nvme0n1 : 10.00 11787.90 46.05 0.00 0.00 0.00 0.00 0.00 00:55:27.719 =================================================================================================================== 00:55:27.719 Total : 11787.90 46.05 0.00 0.00 0.00 0.00 0.00 00:55:27.719 00:55:27.719 00:55:27.719 Latency(us) 00:55:27.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:27.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:27.719 Nvme0n1 : 10.01 11794.07 46.07 0.00 0.00 10847.89 4921.78 30741.38 00:55:27.719 =================================================================================================================== 00:55:27.719 Total : 11794.07 46.07 0.00 0.00 10847.89 4921.78 30741.38 00:55:27.719 0 00:55:27.719 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 90282 00:55:27.719 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 90282 ']' 00:55:27.719 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 90282 00:55:27.719 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:55:27.719 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:55:27.719 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90282 00:55:27.719 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:55:27.719 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:55:27.719 killing process with pid 90282 00:55:27.719 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90282' 00:55:27.719 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 90282 00:55:27.719 Received shutdown signal, test time was about 10.000000 seconds 00:55:27.719 00:55:27.720 Latency(us) 00:55:27.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:27.720 =================================================================================================================== 00:55:27.720 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:55:27.720 
10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 90282 00:55:27.979 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:55:28.238 10:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:55:28.238 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:28.238 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:55:28.498 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:55:28.498 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:55:28.498 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:55:28.757 [2024-07-22 10:50:36.537030] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:55:28.757 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:29.017 2024/07/22 10:50:36 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:8c13ec14-5e42-40c8-b84c-2fc40d9c7b18], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:55:29.017 request: 00:55:29.017 { 00:55:29.017 "method": "bdev_lvol_get_lvstores", 00:55:29.017 
"params": { 00:55:29.017 "uuid": "8c13ec14-5e42-40c8-b84c-2fc40d9c7b18" 00:55:29.017 } 00:55:29.017 } 00:55:29.017 Got JSON-RPC error response 00:55:29.017 GoRPCClient: error on JSON-RPC call 00:55:29.017 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:55:29.017 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:55:29.017 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:55:29.017 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:55:29.017 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:55:29.277 aio_bdev 00:55:29.277 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1c3b56b6-f9a5-4f23-aa3a-2d8d0a671c07 00:55:29.277 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=1c3b56b6-f9a5-4f23-aa3a-2d8d0a671c07 00:55:29.277 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:55:29.277 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:55:29.277 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:55:29.277 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:55:29.277 10:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:55:29.277 10:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c3b56b6-f9a5-4f23-aa3a-2d8d0a671c07 -t 2000 00:55:29.536 [ 00:55:29.536 { 00:55:29.536 "aliases": [ 00:55:29.536 "lvs/lvol" 00:55:29.536 ], 00:55:29.536 "assigned_rate_limits": { 00:55:29.536 "r_mbytes_per_sec": 0, 00:55:29.536 "rw_ios_per_sec": 0, 00:55:29.536 "rw_mbytes_per_sec": 0, 00:55:29.536 "w_mbytes_per_sec": 0 00:55:29.536 }, 00:55:29.536 "block_size": 4096, 00:55:29.536 "claimed": false, 00:55:29.536 "driver_specific": { 00:55:29.536 "lvol": { 00:55:29.536 "base_bdev": "aio_bdev", 00:55:29.536 "clone": false, 00:55:29.536 "esnap_clone": false, 00:55:29.536 "lvol_store_uuid": "8c13ec14-5e42-40c8-b84c-2fc40d9c7b18", 00:55:29.536 "num_allocated_clusters": 38, 00:55:29.536 "snapshot": false, 00:55:29.536 "thin_provision": false 00:55:29.536 } 00:55:29.536 }, 00:55:29.536 "name": "1c3b56b6-f9a5-4f23-aa3a-2d8d0a671c07", 00:55:29.536 "num_blocks": 38912, 00:55:29.536 "product_name": "Logical Volume", 00:55:29.536 "supported_io_types": { 00:55:29.536 "abort": false, 00:55:29.536 "compare": false, 00:55:29.536 "compare_and_write": false, 00:55:29.536 "copy": false, 00:55:29.536 "flush": false, 00:55:29.536 "get_zone_info": false, 00:55:29.536 "nvme_admin": false, 00:55:29.536 "nvme_io": false, 00:55:29.536 "nvme_io_md": false, 00:55:29.536 "nvme_iov_md": false, 00:55:29.536 "read": true, 00:55:29.536 "reset": true, 00:55:29.536 "seek_data": true, 00:55:29.536 "seek_hole": true, 00:55:29.536 "unmap": true, 00:55:29.536 "write": true, 00:55:29.536 "write_zeroes": true, 00:55:29.536 "zcopy": false, 00:55:29.536 "zone_append": false, 00:55:29.536 "zone_management": false 00:55:29.536 
}, 00:55:29.536 "uuid": "1c3b56b6-f9a5-4f23-aa3a-2d8d0a671c07", 00:55:29.536 "zoned": false 00:55:29.536 } 00:55:29.536 ] 00:55:29.536 10:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:55:29.536 10:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:55:29.536 10:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:29.794 10:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:55:29.794 10:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:29.794 10:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:55:30.054 10:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:55:30.054 10:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1c3b56b6-f9a5-4f23-aa3a-2d8d0a671c07 00:55:30.054 10:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c13ec14-5e42-40c8-b84c-2fc40d9c7b18 00:55:30.311 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:55:30.570 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:55:31.137 ************************************ 00:55:31.137 END TEST lvs_grow_clean 00:55:31.137 ************************************ 00:55:31.137 00:55:31.137 real 0m16.672s 00:55:31.137 user 0m15.141s 00:55:31.137 sys 0m2.671s 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:31.137 ************************************ 00:55:31.137 START TEST lvs_grow_dirty 00:55:31.137 ************************************ 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 
00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:55:31.137 10:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:55:31.397 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:55:31.397 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:55:31.397 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:31.397 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:31.397 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:55:31.655 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:55:31.655 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:55:31.655 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2ee04293-4fab-4320-8c28-c8c43b927a8d lvol 150 00:55:31.914 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8f18f5ae-b994-40f3-bbe0-9149fdc91ab7 00:55:31.914 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:55:31.914 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:55:32.172 [2024-07-22 10:50:39.853471] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:55:32.172 [2024-07-22 10:50:39.853535] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:55:32.172 true 00:55:32.172 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:32.172 10:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:55:32.172 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:55:32.172 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:55:32.431 10:50:40 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8f18f5ae-b994-40f3-bbe0-9149fdc91ab7 00:55:32.690 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:55:32.690 [2024-07-22 10:50:40.588686] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:32.690 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:55:32.949 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=90705 00:55:32.949 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:55:32.949 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:55:32.949 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 90705 /var/tmp/bdevperf.sock 00:55:32.949 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 90705 ']' 00:55:32.949 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:55:32.949 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:55:32.949 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:55:32.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:55:32.949 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:55:32.949 10:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:55:32.949 [2024-07-22 10:50:40.857854] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:55:32.949 [2024-07-22 10:50:40.857927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90705 ] 00:55:33.208 [2024-07-22 10:50:40.975471] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:55:33.208 [2024-07-22 10:50:40.998483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:33.208 [2024-07-22 10:50:41.046101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:55:33.776 10:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:55:33.776 10:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:55:33.776 10:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:55:34.051 Nvme0n1 00:55:34.051 10:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:55:34.310 [ 00:55:34.310 { 00:55:34.310 "aliases": [ 00:55:34.310 "8f18f5ae-b994-40f3-bbe0-9149fdc91ab7" 00:55:34.310 ], 00:55:34.310 "assigned_rate_limits": { 00:55:34.310 "r_mbytes_per_sec": 0, 00:55:34.310 "rw_ios_per_sec": 0, 00:55:34.310 "rw_mbytes_per_sec": 0, 00:55:34.310 "w_mbytes_per_sec": 0 00:55:34.310 }, 00:55:34.310 "block_size": 4096, 00:55:34.310 "claimed": false, 00:55:34.310 "driver_specific": { 00:55:34.310 "mp_policy": "active_passive", 00:55:34.310 "nvme": [ 00:55:34.310 { 00:55:34.310 "ctrlr_data": { 00:55:34.310 "ana_reporting": false, 00:55:34.310 "cntlid": 1, 00:55:34.310 "firmware_revision": "24.09", 00:55:34.310 "model_number": "SPDK bdev Controller", 00:55:34.310 "multi_ctrlr": true, 00:55:34.310 "oacs": { 00:55:34.310 "firmware": 0, 00:55:34.310 "format": 0, 00:55:34.310 "ns_manage": 0, 00:55:34.310 "security": 0 00:55:34.310 }, 00:55:34.310 "serial_number": "SPDK0", 00:55:34.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:34.310 "vendor_id": "0x8086" 00:55:34.310 }, 00:55:34.310 "ns_data": { 00:55:34.310 "can_share": true, 00:55:34.310 "id": 1 00:55:34.310 }, 00:55:34.310 "trid": { 00:55:34.310 "adrfam": "IPv4", 00:55:34.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:34.310 "traddr": "10.0.0.2", 00:55:34.310 "trsvcid": "4420", 00:55:34.310 "trtype": "TCP" 00:55:34.310 }, 00:55:34.310 "vs": { 00:55:34.310 "nvme_version": "1.3" 00:55:34.310 } 00:55:34.310 } 00:55:34.310 ] 00:55:34.310 }, 00:55:34.311 "memory_domains": [ 00:55:34.311 { 00:55:34.311 "dma_device_id": "system", 00:55:34.311 "dma_device_type": 1 00:55:34.311 } 00:55:34.311 ], 00:55:34.311 "name": "Nvme0n1", 00:55:34.311 "num_blocks": 38912, 00:55:34.311 "product_name": "NVMe disk", 00:55:34.311 "supported_io_types": { 00:55:34.311 "abort": true, 00:55:34.311 "compare": true, 00:55:34.311 "compare_and_write": true, 00:55:34.311 "copy": true, 00:55:34.311 "flush": true, 00:55:34.311 "get_zone_info": false, 00:55:34.311 "nvme_admin": true, 00:55:34.311 "nvme_io": true, 00:55:34.311 "nvme_io_md": false, 00:55:34.311 "nvme_iov_md": false, 00:55:34.311 "read": true, 00:55:34.311 "reset": true, 00:55:34.311 "seek_data": false, 00:55:34.311 "seek_hole": false, 00:55:34.311 "unmap": true, 00:55:34.311 "write": true, 00:55:34.311 "write_zeroes": true, 00:55:34.311 "zcopy": false, 00:55:34.311 "zone_append": false, 00:55:34.311 "zone_management": false 00:55:34.311 }, 00:55:34.311 "uuid": "8f18f5ae-b994-40f3-bbe0-9149fdc91ab7", 00:55:34.311 "zoned": false 00:55:34.311 } 00:55:34.311 ] 00:55:34.311 10:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:55:34.311 10:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=90754 00:55:34.311 10:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:55:34.311 Running I/O for 10 seconds... 00:55:35.687 Latency(us) 00:55:35.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:35.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:35.687 Nvme0n1 : 1.00 12352.00 48.25 0.00 0.00 0.00 0.00 0.00 00:55:35.687 =================================================================================================================== 00:55:35.687 Total : 12352.00 48.25 0.00 0.00 0.00 0.00 0.00 00:55:35.687 00:55:36.327 10:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:36.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:36.327 Nvme0n1 : 2.00 11485.00 44.86 0.00 0.00 0.00 0.00 0.00 00:55:36.327 =================================================================================================================== 00:55:36.327 Total : 11485.00 44.86 0.00 0.00 0.00 0.00 0.00 00:55:36.327 00:55:36.585 true 00:55:36.585 10:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:36.585 10:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:55:36.844 10:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:55:36.844 10:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:55:36.844 10:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 90754 00:55:37.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:37.411 Nvme0n1 : 3.00 10980.67 42.89 0.00 0.00 0.00 0.00 0.00 00:55:37.411 =================================================================================================================== 00:55:37.411 Total : 10980.67 42.89 0.00 0.00 0.00 0.00 0.00 00:55:37.411 00:55:38.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:38.346 Nvme0n1 : 4.00 11220.00 43.83 0.00 0.00 0.00 0.00 0.00 00:55:38.346 =================================================================================================================== 00:55:38.347 Total : 11220.00 43.83 0.00 0.00 0.00 0.00 0.00 00:55:38.347 00:55:39.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:39.722 Nvme0n1 : 5.00 11179.00 43.67 0.00 0.00 0.00 0.00 0.00 00:55:39.722 =================================================================================================================== 00:55:39.722 Total : 11179.00 43.67 0.00 0.00 0.00 0.00 0.00 00:55:39.722 00:55:40.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:40.290 Nvme0n1 : 6.00 11230.00 43.87 0.00 0.00 0.00 0.00 0.00 00:55:40.290 =================================================================================================================== 00:55:40.290 Total : 11230.00 43.87 0.00 0.00 0.00 0.00 0.00 00:55:40.290 00:55:41.663 Job: Nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:41.663 Nvme0n1 : 7.00 11296.43 44.13 0.00 0.00 0.00 0.00 0.00 00:55:41.663 =================================================================================================================== 00:55:41.663 Total : 11296.43 44.13 0.00 0.00 0.00 0.00 0.00 00:55:41.663 00:55:42.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:42.602 Nvme0n1 : 8.00 11330.88 44.26 0.00 0.00 0.00 0.00 0.00 00:55:42.602 =================================================================================================================== 00:55:42.602 Total : 11330.88 44.26 0.00 0.00 0.00 0.00 0.00 00:55:42.602 00:55:43.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:43.552 Nvme0n1 : 9.00 11320.89 44.22 0.00 0.00 0.00 0.00 0.00 00:55:43.552 =================================================================================================================== 00:55:43.552 Total : 11320.89 44.22 0.00 0.00 0.00 0.00 0.00 00:55:43.552 00:55:44.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:44.491 Nvme0n1 : 10.00 11188.00 43.70 0.00 0.00 0.00 0.00 0.00 00:55:44.491 =================================================================================================================== 00:55:44.491 Total : 11188.00 43.70 0.00 0.00 0.00 0.00 0.00 00:55:44.491 00:55:44.491 00:55:44.491 Latency(us) 00:55:44.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:44.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:55:44.491 Nvme0n1 : 10.01 11192.90 43.72 0.00 0.00 11431.70 3447.88 328469.59 00:55:44.491 =================================================================================================================== 00:55:44.491 Total : 11192.90 43.72 0.00 0.00 11431.70 3447.88 328469.59 00:55:44.491 0 00:55:44.491 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 90705 00:55:44.491 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 90705 ']' 00:55:44.491 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 90705 00:55:44.491 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:55:44.491 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:55:44.491 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90705 00:55:44.491 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:55:44.491 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:55:44.491 killing process with pid 90705 00:55:44.491 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90705' 00:55:44.491 Received shutdown signal, test time was about 10.000000 seconds 00:55:44.491 00:55:44.491 Latency(us) 00:55:44.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:44.491 =================================================================================================================== 00:55:44.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:55:44.491 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 90705 00:55:44.491 
10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 90705 00:55:44.751 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:55:44.751 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:55:45.010 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:45.010 10:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 90131 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 90131 00:55:45.268 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 90131 Killed "${NVMF_APP[@]}" "$@" 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=90915 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 90915 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 90915 ']' 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:55:45.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:55:45.268 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:55:45.268 [2024-07-22 10:50:53.148329] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:55:45.268 [2024-07-22 10:50:53.148403] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:45.526 [2024-07-22 10:50:53.268795] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:55:45.526 [2024-07-22 10:50:53.293180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:45.526 [2024-07-22 10:50:53.335131] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:45.526 [2024-07-22 10:50:53.335183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:45.526 [2024-07-22 10:50:53.335209] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:45.526 [2024-07-22 10:50:53.335217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:45.526 [2024-07-22 10:50:53.335224] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:45.526 [2024-07-22 10:50:53.335254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:55:46.093 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:55:46.093 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:55:46.093 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:55:46.093 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:55:46.093 10:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:55:46.352 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:46.352 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:55:46.352 [2024-07-22 10:50:54.237227] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:55:46.352 [2024-07-22 10:50:54.237711] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:55:46.352 [2024-07-22 10:50:54.237886] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:55:46.611 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:55:46.611 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8f18f5ae-b994-40f3-bbe0-9149fdc91ab7 00:55:46.611 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8f18f5ae-b994-40f3-bbe0-9149fdc91ab7 00:55:46.611 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:55:46.611 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:55:46.611 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:55:46.611 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:55:46.611 10:50:54 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:55:46.611 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f18f5ae-b994-40f3-bbe0-9149fdc91ab7 -t 2000 00:55:46.870 [ 00:55:46.870 { 00:55:46.870 "aliases": [ 00:55:46.870 "lvs/lvol" 00:55:46.870 ], 00:55:46.870 "assigned_rate_limits": { 00:55:46.870 "r_mbytes_per_sec": 0, 00:55:46.870 "rw_ios_per_sec": 0, 00:55:46.870 "rw_mbytes_per_sec": 0, 00:55:46.870 "w_mbytes_per_sec": 0 00:55:46.870 }, 00:55:46.870 "block_size": 4096, 00:55:46.870 "claimed": false, 00:55:46.870 "driver_specific": { 00:55:46.870 "lvol": { 00:55:46.870 "base_bdev": "aio_bdev", 00:55:46.870 "clone": false, 00:55:46.870 "esnap_clone": false, 00:55:46.870 "lvol_store_uuid": "2ee04293-4fab-4320-8c28-c8c43b927a8d", 00:55:46.870 "num_allocated_clusters": 38, 00:55:46.870 "snapshot": false, 00:55:46.870 "thin_provision": false 00:55:46.870 } 00:55:46.870 }, 00:55:46.870 "name": "8f18f5ae-b994-40f3-bbe0-9149fdc91ab7", 00:55:46.870 "num_blocks": 38912, 00:55:46.870 "product_name": "Logical Volume", 00:55:46.870 "supported_io_types": { 00:55:46.870 "abort": false, 00:55:46.870 "compare": false, 00:55:46.870 "compare_and_write": false, 00:55:46.870 "copy": false, 00:55:46.870 "flush": false, 00:55:46.870 "get_zone_info": false, 00:55:46.870 "nvme_admin": false, 00:55:46.870 "nvme_io": false, 00:55:46.870 "nvme_io_md": false, 00:55:46.870 "nvme_iov_md": false, 00:55:46.870 "read": true, 00:55:46.870 "reset": true, 00:55:46.870 "seek_data": true, 00:55:46.870 "seek_hole": true, 00:55:46.870 "unmap": true, 00:55:46.870 "write": true, 00:55:46.870 "write_zeroes": true, 00:55:46.870 "zcopy": false, 00:55:46.870 "zone_append": false, 00:55:46.870 "zone_management": false 00:55:46.870 }, 00:55:46.870 "uuid": "8f18f5ae-b994-40f3-bbe0-9149fdc91ab7", 00:55:46.870 "zoned": false 00:55:46.870 } 00:55:46.870 ] 00:55:46.870 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:55:46.870 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:46.870 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:55:47.129 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:55:47.129 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:47.129 10:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:55:47.389 [2024-07-22 10:50:55.257100] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 
00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:55:47.389 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:47.648 2024/07/22 10:50:55 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:2ee04293-4fab-4320-8c28-c8c43b927a8d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:55:47.648 request: 00:55:47.648 { 00:55:47.648 "method": "bdev_lvol_get_lvstores", 00:55:47.648 "params": { 00:55:47.648 "uuid": "2ee04293-4fab-4320-8c28-c8c43b927a8d" 00:55:47.648 } 00:55:47.648 } 00:55:47.648 Got JSON-RPC error response 00:55:47.648 GoRPCClient: error on JSON-RPC call 00:55:47.648 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:55:47.648 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:55:47.648 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:55:47.648 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:55:47.648 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:55:47.907 aio_bdev 00:55:47.907 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8f18f5ae-b994-40f3-bbe0-9149fdc91ab7 00:55:47.907 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8f18f5ae-b994-40f3-bbe0-9149fdc91ab7 00:55:47.907 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:55:47.907 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:55:47.907 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:55:47.907 10:50:55 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:55:47.907 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:55:48.165 10:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f18f5ae-b994-40f3-bbe0-9149fdc91ab7 -t 2000 00:55:48.166 [ 00:55:48.166 { 00:55:48.166 "aliases": [ 00:55:48.166 "lvs/lvol" 00:55:48.166 ], 00:55:48.166 "assigned_rate_limits": { 00:55:48.166 "r_mbytes_per_sec": 0, 00:55:48.166 "rw_ios_per_sec": 0, 00:55:48.166 "rw_mbytes_per_sec": 0, 00:55:48.166 "w_mbytes_per_sec": 0 00:55:48.166 }, 00:55:48.166 "block_size": 4096, 00:55:48.166 "claimed": false, 00:55:48.166 "driver_specific": { 00:55:48.166 "lvol": { 00:55:48.166 "base_bdev": "aio_bdev", 00:55:48.166 "clone": false, 00:55:48.166 "esnap_clone": false, 00:55:48.166 "lvol_store_uuid": "2ee04293-4fab-4320-8c28-c8c43b927a8d", 00:55:48.166 "num_allocated_clusters": 38, 00:55:48.166 "snapshot": false, 00:55:48.166 "thin_provision": false 00:55:48.166 } 00:55:48.166 }, 00:55:48.166 "name": "8f18f5ae-b994-40f3-bbe0-9149fdc91ab7", 00:55:48.166 "num_blocks": 38912, 00:55:48.166 "product_name": "Logical Volume", 00:55:48.166 "supported_io_types": { 00:55:48.166 "abort": false, 00:55:48.166 "compare": false, 00:55:48.166 "compare_and_write": false, 00:55:48.166 "copy": false, 00:55:48.166 "flush": false, 00:55:48.166 "get_zone_info": false, 00:55:48.166 "nvme_admin": false, 00:55:48.166 "nvme_io": false, 00:55:48.166 "nvme_io_md": false, 00:55:48.166 "nvme_iov_md": false, 00:55:48.166 "read": true, 00:55:48.166 "reset": true, 00:55:48.166 "seek_data": true, 00:55:48.166 "seek_hole": true, 00:55:48.166 "unmap": true, 00:55:48.166 "write": true, 00:55:48.166 "write_zeroes": true, 00:55:48.166 "zcopy": false, 00:55:48.166 "zone_append": false, 00:55:48.166 "zone_management": false 00:55:48.166 }, 00:55:48.166 "uuid": "8f18f5ae-b994-40f3-bbe0-9149fdc91ab7", 00:55:48.166 "zoned": false 00:55:48.166 } 00:55:48.166 ] 00:55:48.166 10:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:55:48.166 10:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:48.166 10:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:55:48.424 10:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:55:48.424 10:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 00:55:48.424 10:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:55:48.683 10:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:55:48.683 10:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8f18f5ae-b994-40f3-bbe0-9149fdc91ab7 00:55:48.942 10:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ee04293-4fab-4320-8c28-c8c43b927a8d 
00:55:49.200 10:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:55:49.200 10:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:55:49.767 00:55:49.767 real 0m18.669s 00:55:49.767 user 0m37.338s 00:55:49.767 sys 0m7.463s 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:55:49.767 ************************************ 00:55:49.767 END TEST lvs_grow_dirty 00:55:49.767 ************************************ 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:55:49.767 nvmf_trace.0 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:55:49.767 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:55:50.025 rmmod nvme_tcp 00:55:50.025 rmmod nvme_fabrics 00:55:50.025 rmmod nvme_keyring 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 90915 ']' 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 90915 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 90915 ']' 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 90915 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:55:50.025 10:50:57 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90915 00:55:50.025 killing process with pid 90915 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90915' 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 90915 00:55:50.025 10:50:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 90915 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:55:50.282 ************************************ 00:55:50.282 END TEST nvmf_lvs_grow 00:55:50.282 ************************************ 00:55:50.282 00:55:50.282 real 0m37.776s 00:55:50.282 user 0m57.930s 00:55:50.282 sys 0m10.999s 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:55:50.282 10:50:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:50.282 10:50:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:55:50.282 10:50:58 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:55:50.282 10:50:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:55:50.282 10:50:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:55:50.282 10:50:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:55:50.282 ************************************ 00:55:50.282 START TEST nvmf_bdev_io_wait 00:55:50.282 ************************************ 00:55:50.282 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:55:50.540 * Looking for test storage... 
00:55:50.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:55:50.540 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:55:50.541 Cannot find device "nvmf_tgt_br" 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:55:50.541 Cannot find device "nvmf_tgt_br2" 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:55:50.541 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:55:50.799 Cannot find device "nvmf_tgt_br" 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:55:50.799 Cannot find device "nvmf_tgt_br2" 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:50.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:50.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:50.799 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:55:50.800 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:55:51.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:55:51.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:55:51.058 00:55:51.058 --- 10.0.0.2 ping statistics --- 00:55:51.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:51.058 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:55:51.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:51.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:55:51.058 00:55:51.058 --- 10.0.0.3 ping statistics --- 00:55:51.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:51.058 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:51.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:51.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:55:51.058 00:55:51.058 --- 10.0.0.1 ping statistics --- 00:55:51.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:51.058 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=91323 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 91323 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 91323 ']' 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:51.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:51.058 10:50:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:55:51.058 [2024-07-22 10:50:58.910456] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:55:51.058 [2024-07-22 10:50:58.910549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:51.317 [2024-07-22 10:50:59.029717] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:55:51.317 [2024-07-22 10:50:59.054588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:55:51.317 [2024-07-22 10:50:59.100411] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:51.317 [2024-07-22 10:50:59.100480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:51.317 [2024-07-22 10:50:59.100490] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:51.317 [2024-07-22 10:50:59.100498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:51.317 [2024-07-22 10:50:59.100505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:51.317 [2024-07-22 10:50:59.101020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:55:51.317 [2024-07-22 10:50:59.101199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:55:51.317 [2024-07-22 10:50:59.101623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:55:51.317 [2024-07-22 10:50:59.101624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:51.884 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # 
set +x 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:52.143 [2024-07-22 10:50:59.869628] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:52.143 Malloc0 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:52.143 [2024-07-22 10:50:59.925169] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:52.143 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=91376 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=91377 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=91379 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=91381 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local 
subsystem config 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:55:52.144 { 00:55:52.144 "params": { 00:55:52.144 "name": "Nvme$subsystem", 00:55:52.144 "trtype": "$TEST_TRANSPORT", 00:55:52.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:52.144 "adrfam": "ipv4", 00:55:52.144 "trsvcid": "$NVMF_PORT", 00:55:52.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:52.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:52.144 "hdgst": ${hdgst:-false}, 00:55:52.144 "ddgst": ${ddgst:-false} 00:55:52.144 }, 00:55:52.144 "method": "bdev_nvme_attach_controller" 00:55:52.144 } 00:55:52.144 EOF 00:55:52.144 )") 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:55:52.144 { 00:55:52.144 "params": { 00:55:52.144 "name": "Nvme$subsystem", 00:55:52.144 "trtype": "$TEST_TRANSPORT", 00:55:52.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:52.144 "adrfam": "ipv4", 00:55:52.144 "trsvcid": "$NVMF_PORT", 00:55:52.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:52.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:52.144 "hdgst": ${hdgst:-false}, 00:55:52.144 "ddgst": ${ddgst:-false} 00:55:52.144 }, 00:55:52.144 "method": "bdev_nvme_attach_controller" 00:55:52.144 } 00:55:52.144 EOF 00:55:52.144 )") 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:55:52.144 { 00:55:52.144 "params": { 00:55:52.144 "name": "Nvme$subsystem", 00:55:52.144 "trtype": "$TEST_TRANSPORT", 00:55:52.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:52.144 "adrfam": "ipv4", 00:55:52.144 "trsvcid": "$NVMF_PORT", 00:55:52.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:52.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:52.144 "hdgst": ${hdgst:-false}, 00:55:52.144 "ddgst": ${ddgst:-false} 00:55:52.144 }, 00:55:52.144 "method": "bdev_nvme_attach_controller" 00:55:52.144 } 00:55:52.144 EOF 00:55:52.144 )") 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- 
# cat 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:55:52.144 { 00:55:52.144 "params": { 00:55:52.144 "name": "Nvme$subsystem", 00:55:52.144 "trtype": "$TEST_TRANSPORT", 00:55:52.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:52.144 "adrfam": "ipv4", 00:55:52.144 "trsvcid": "$NVMF_PORT", 00:55:52.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:52.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:52.144 "hdgst": ${hdgst:-false}, 00:55:52.144 "ddgst": ${ddgst:-false} 00:55:52.144 }, 00:55:52.144 "method": "bdev_nvme_attach_controller" 00:55:52.144 } 00:55:52.144 EOF 00:55:52.144 )") 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:55:52.144 "params": { 00:55:52.144 "name": "Nvme1", 00:55:52.144 "trtype": "tcp", 00:55:52.144 "traddr": "10.0.0.2", 00:55:52.144 "adrfam": "ipv4", 00:55:52.144 "trsvcid": "4420", 00:55:52.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:52.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:52.144 "hdgst": false, 00:55:52.144 "ddgst": false 00:55:52.144 }, 00:55:52.144 "method": "bdev_nvme_attach_controller" 00:55:52.144 }' 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:55:52.144 "params": { 00:55:52.144 "name": "Nvme1", 00:55:52.144 "trtype": "tcp", 00:55:52.144 "traddr": "10.0.0.2", 00:55:52.144 "adrfam": "ipv4", 00:55:52.144 "trsvcid": "4420", 00:55:52.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:52.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:52.144 "hdgst": false, 00:55:52.144 "ddgst": false 00:55:52.144 }, 00:55:52.144 "method": "bdev_nvme_attach_controller" 00:55:52.144 }' 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:55:52.144 "params": { 00:55:52.144 "name": "Nvme1", 00:55:52.144 "trtype": "tcp", 00:55:52.144 "traddr": "10.0.0.2", 00:55:52.144 "adrfam": "ipv4", 00:55:52.144 "trsvcid": "4420", 00:55:52.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:52.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:52.144 "hdgst": false, 00:55:52.144 "ddgst": false 00:55:52.144 }, 00:55:52.144 "method": "bdev_nvme_attach_controller" 00:55:52.144 }' 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:55:52.144 10:50:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:55:52.144 "params": { 00:55:52.144 "name": "Nvme1", 00:55:52.144 "trtype": "tcp", 00:55:52.144 "traddr": "10.0.0.2", 00:55:52.144 "adrfam": "ipv4", 00:55:52.144 "trsvcid": "4420", 00:55:52.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:52.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:52.145 "hdgst": false, 00:55:52.145 "ddgst": false 00:55:52.145 }, 00:55:52.145 "method": "bdev_nvme_attach_controller" 00:55:52.145 }' 00:55:52.145 [2024-07-22 10:50:59.983197] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:55:52.145 [2024-07-22 10:50:59.983284] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:55:52.145 [2024-07-22 10:50:59.992577] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:55:52.145 [2024-07-22 10:50:59.992639] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:55:52.145 [2024-07-22 10:50:59.993393] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:55:52.145 [2024-07-22 10:50:59.993784] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:55:52.145 10:51:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 91376 00:55:52.145 [2024-07-22 10:51:00.010101] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:55:52.145 [2024-07-22 10:51:00.010174] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:55:52.403 [2024-07-22 10:51:00.150622] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:55:52.403 [2024-07-22 10:51:00.170027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:52.403 [2024-07-22 10:51:00.198718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:55:52.403 [2024-07-22 10:51:00.211739] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:55:52.403 [2024-07-22 10:51:00.237478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:52.403 [2024-07-22 10:51:00.266077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:55:52.403 [2024-07-22 10:51:00.275751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
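
The four "DPDK EAL parameters" blocks above belong to four separate bdevperf processes: bdev_io_wait.sh launches one per workload (write on core 0x10, read on 0x20, flush on 0x40, unmap on 0x80, shared-memory ids 1 to 4, hence the spdk1..spdk4 file prefixes) in the background and then reaps them, which is what the "wait 91376" trace is. A hedged sketch of that launch pattern, reusing the gen_json helper sketched earlier in place of gen_nvmf_target_json; the pid variable names are illustrative.

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# One backgrounded instance per workload; distinct core masks and -i ids keep
# the shared-memory file prefixes (spdk1..spdk4 in the EAL lines) from colliding.
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 & write_pid=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_json) -q 128 -o 4096 -w read  -t 1 -s 256 & read_pid=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_json) -q 128 -o 4096 -w flush -t 1 -s 256 & flush_pid=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & unmap_pid=$!

# bdev_io_wait.sh@37-@40: block until every workload has printed its result table.
wait "$write_pid" "$read_pid" "$flush_pid" "$unmap_pid"
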
00:55:52.403 [2024-07-22 10:51:00.318899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:52.661 [2024-07-22 10:51:00.342035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:55:52.661 [2024-07-22 10:51:00.363711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:55:52.661 [2024-07-22 10:51:00.367353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:52.661 Running I/O for 1 seconds... 00:55:52.661 Running I/O for 1 seconds... 00:55:52.661 [2024-07-22 10:51:00.394053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:55:52.661 Running I/O for 1 seconds... 00:55:52.661 Running I/O for 1 seconds... 00:55:53.615 00:55:53.615 Latency(us) 00:55:53.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:53.615 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:55:53.615 Nvme1n1 : 1.00 250839.01 979.84 0.00 0.00 508.56 228.65 815.91 00:55:53.615 =================================================================================================================== 00:55:53.615 Total : 250839.01 979.84 0.00 0.00 508.56 228.65 815.91 00:55:53.615 00:55:53.615 Latency(us) 00:55:53.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:53.615 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:55:53.615 Nvme1n1 : 1.01 11874.34 46.38 0.00 0.00 10736.20 7158.95 17897.38 00:55:53.615 =================================================================================================================== 00:55:53.615 Total : 11874.34 46.38 0.00 0.00 10736.20 7158.95 17897.38 00:55:53.615 00:55:53.615 Latency(us) 00:55:53.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:53.615 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:55:53.615 Nvme1n1 : 1.01 9996.45 39.05 0.00 0.00 12758.34 6553.60 22634.92 00:55:53.615 =================================================================================================================== 00:55:53.615 Total : 9996.45 39.05 0.00 0.00 12758.34 6553.60 22634.92 00:55:53.615 00:55:53.615 Latency(us) 00:55:53.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:53.615 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:55:53.615 Nvme1n1 : 1.00 10703.02 41.81 0.00 0.00 11925.79 1881.86 17160.43 00:55:53.615 =================================================================================================================== 00:55:53.615 Total : 10703.02 41.81 0.00 0.00 11925.79 1881.86 17160.43 00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 91377 00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 91379 00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 91381 00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 
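
Each bdevperf table above reports IOPS and MiB/s for the same 4096-byte I/O size, so the two columns can be cross-checked directly with MiB/s = IOPS x 4096 / 2^20; for the read job, 11874.34 x 4096 / 1048576 comes to about 46.38 MiB/s, matching the table (and 250839.01 IOPS for the flush job gives 979.84 MiB/s). A one-liner to reproduce the conversion:

awk 'BEGIN { printf "%.2f MiB/s\n", 11874.34 * 4096 / 1048576 }'   # read job -> 46.38 MiB/s
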
00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:55:53.873 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:55:54.131 rmmod nvme_tcp 00:55:54.131 rmmod nvme_fabrics 00:55:54.131 rmmod nvme_keyring 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 91323 ']' 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 91323 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 91323 ']' 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 91323 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91323 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:55:54.131 killing process with pid 91323 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91323' 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 91323 00:55:54.131 10:51:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 91323 00:55:54.131 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:55:54.131 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:55:54.131 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:55:54.131 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:55:54.131 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:55:54.131 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:54.131 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:55:54.131 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:54.390 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:55:54.390 00:55:54.390 real 0m3.904s 00:55:54.390 user 0m16.279s 00:55:54.390 sys 0m2.074s 00:55:54.390 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:55:54.390 10:51:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:54.390 
************************************ 00:55:54.390 END TEST nvmf_bdev_io_wait 00:55:54.390 ************************************ 00:55:54.390 10:51:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:55:54.390 10:51:02 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:55:54.390 10:51:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:55:54.390 10:51:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:55:54.390 10:51:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:55:54.390 ************************************ 00:55:54.390 START TEST nvmf_queue_depth 00:55:54.390 ************************************ 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:55:54.390 * Looking for test storage... 00:55:54.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:54.390 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:54.649 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:54.649 10:51:02 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:54.649 10:51:02 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:54.649 10:51:02 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:55:54.650 Cannot find device "nvmf_tgt_br" 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:55:54.650 Cannot find device "nvmf_tgt_br2" 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:55:54.650 10:51:02 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:55:54.650 Cannot find device "nvmf_tgt_br" 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:55:54.650 Cannot find device "nvmf_tgt_br2" 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:54.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:54.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:54.650 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:55:54.908 
10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:55:54.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:54.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:55:54.908 00:55:54.908 --- 10.0.0.2 ping statistics --- 00:55:54.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:54.908 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:55:54.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:54.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:55:54.908 00:55:54.908 --- 10.0.0.3 ping statistics --- 00:55:54.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:54.908 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:55:54.908 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:54.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:54.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:55:54.908 00:55:54.908 --- 10.0.0.1 ping statistics --- 00:55:54.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:54.908 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=91583 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 91583 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 91583 ']' 00:55:54.909 10:51:02 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:54.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:55:54.909 10:51:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:55.167 [2024-07-22 10:51:02.853249] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:55:55.167 [2024-07-22 10:51:02.853341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:55.167 [2024-07-22 10:51:02.970956] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:55:55.167 [2024-07-22 10:51:02.987969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:55.167 [2024-07-22 10:51:03.033734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:55.167 [2024-07-22 10:51:03.033790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:55.167 [2024-07-22 10:51:03.033800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:55.167 [2024-07-22 10:51:03.033809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:55.167 [2024-07-22 10:51:03.033815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
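
The nvmf_veth_init trace a little further up (nvmf/common.sh@166 through @207) is what gives this nvmf_tgt instance its 10.0.0.2 listen address: a dedicated network namespace, veth pairs whose target ends are moved into it, a bridge joining the host-side peers, an iptables rule opening TCP port 4420, and three pings to verify connectivity. A condensed sketch of that topology using the names from the trace; the second target interface (nvmf_tgt_if2 with 10.0.0.3/24) is built the same way and omitted here for brevity.

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the initiator end stays in the host, the target end moves into the netns.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # initiator to target, as in the trace
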
00:55:55.167 [2024-07-22 10:51:03.033839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:56.112 [2024-07-22 10:51:03.750759] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:56.112 Malloc0 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:56.112 [2024-07-22 10:51:03.819105] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:56.112 10:51:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=91633 00:55:56.113 10:51:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:55:56.113 10:51:03 nvmf_tcp.nvmf_queue_depth -- 
target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:55:56.113 10:51:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 91633 /var/tmp/bdevperf.sock 00:55:56.113 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 91633 ']' 00:55:56.113 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:55:56.113 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:55:56.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:55:56.113 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:55:56.113 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:55:56.113 10:51:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:56.113 [2024-07-22 10:51:03.879262] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:55:56.113 [2024-07-22 10:51:03.879344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91633 ] 00:55:56.113 [2024-07-22 10:51:03.997504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:55:56.113 [2024-07-22 10:51:04.021296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:56.371 [2024-07-22 10:51:04.063399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:55:56.937 10:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:55:56.937 10:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:55:56.937 10:51:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:56.937 10:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:56.937 10:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:56.937 NVMe0n1 00:55:56.937 10:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:56.937 10:51:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:55:57.196 Running I/O for 10 seconds... 
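
The queue_depth run just traced follows a two-process pattern: the target is provisioned over its default RPC socket, while bdevperf is started with -z (idle until told to run) on its own socket (-r /var/tmp/bdevperf.sock), has the NVMe-oF controller attached by RPC, and is then driven by bdevperf.py perform_tests. A hedged recap of that sequence using only the commands visible above; rpc_cmd in the trace is the test wrapper around scripts/rpc.py, and the socket-polling loop stands in for the waitforlisten helper, whose body is not shown here.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BDEVPERF_SOCK=/var/tmp/bdevperf.sock

# Target side (nvmf_tgt is already running inside the netns).
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits on its own RPC socket until a bdev is attached.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -z -r "$BDEVPERF_SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# Simple stand-in for waitforlisten (assumption): wait for the socket to appear.
while [ ! -S "$BDEVPERF_SOCK" ]; do sleep 0.2; done

"$RPC" -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s "$BDEVPERF_SOCK" perform_tests

wait "$bdevperf_pid"
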
00:56:07.172 00:56:07.172 Latency(us) 00:56:07.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:07.172 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:56:07.172 Verification LBA range: start 0x0 length 0x4000 00:56:07.172 NVMe0n1 : 10.07 12377.93 48.35 0.00 0.00 82453.50 19266.00 89276.35 00:56:07.172 =================================================================================================================== 00:56:07.172 Total : 12377.93 48.35 0.00 0.00 82453.50 19266.00 89276.35 00:56:07.172 0 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 91633 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 91633 ']' 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 91633 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91633 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:56:07.172 killing process with pid 91633 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91633' 00:56:07.172 Received shutdown signal, test time was about 10.000000 seconds 00:56:07.172 00:56:07.172 Latency(us) 00:56:07.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:07.172 =================================================================================================================== 00:56:07.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 91633 00:56:07.172 10:51:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 91633 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:56:07.431 rmmod nvme_tcp 00:56:07.431 rmmod nvme_fabrics 00:56:07.431 rmmod nvme_keyring 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 91583 ']' 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 91583 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 91583 ']' 00:56:07.431 
10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 91583 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91583 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:56:07.431 killing process with pid 91583 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91583' 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 91583 00:56:07.431 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 91583 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:56:07.690 00:56:07.690 real 0m13.370s 00:56:07.690 user 0m22.644s 00:56:07.690 sys 0m2.218s 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:56:07.690 10:51:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:07.690 ************************************ 00:56:07.690 END TEST nvmf_queue_depth 00:56:07.690 ************************************ 00:56:07.690 10:51:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:56:07.690 10:51:15 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:56:07.690 10:51:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:56:07.690 10:51:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:56:07.690 10:51:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:56:07.949 ************************************ 00:56:07.949 START TEST nvmf_target_multipath 00:56:07.949 ************************************ 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:56:07.949 * Looking for test storage... 
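
Before the multipath test proceeds, note the teardown pattern that both tests above end with (nvmftestfini): unload the NVMe fabrics modules, kill the nvmf_tgt process recorded in nvmfpid, remove the target namespace, and flush the initiator address. A condensed sketch of that cleanup; the pid is taken from this run as a placeholder, and the namespace deletion is an assumption, since _remove_spdk_ns hides its commands behind xtrace_disable.

# Module unload; the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines in the
# trace are the verbose output of modprobe -v -r.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target that nvmfappstart recorded in nvmfpid (91583 in this run).
nvmfpid=91583
kill "$nvmfpid"

# Assumed equivalent of _remove_spdk_ns, then flush the initiator address
# so the next test starts from a clean slate.
ip netns delete nvmf_tgt_ns_spdk
ip -4 addr flush nvmf_init_if
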
00:56:07.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:07.949 10:51:15 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:56:07.949 Cannot find device "nvmf_tgt_br" 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:56:07.949 Cannot find device "nvmf_tgt_br2" 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:56:07.949 Cannot find device "nvmf_tgt_br" 00:56:07.949 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:56:07.949 
10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:56:08.208 Cannot find device "nvmf_tgt_br2" 00:56:08.208 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:56:08.208 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:56:08.208 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:56:08.208 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:08.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:08.208 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:56:08.208 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:08.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:08.208 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:56:08.208 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:56:08.208 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:56:08.208 10:51:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:56:08.208 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:56:08.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:56:08.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:56:08.468 00:56:08.468 --- 10.0.0.2 ping statistics --- 00:56:08.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:08.468 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:56:08.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:56:08.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:56:08.468 00:56:08.468 --- 10.0.0.3 ping statistics --- 00:56:08.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:08.468 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:56:08.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:56:08.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:56:08.468 00:56:08.468 --- 10.0.0.1 ping statistics --- 00:56:08.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:08.468 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=91966 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
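For reference, the topology that nvmf/common.sh has just built — two target-side veth interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3, one per multipath path), one initiator interface on the host (10.0.0.1), and all host-side peers joined by the nvmf_br bridge — can be reproduced standalone with roughly the commands below (run as root; names and addresses are exactly the ones used above):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3           # host -> both target paths
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target namespace -> host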
00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 91966 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 91966 ']' 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:56:08.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:56:08.468 10:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:56:08.468 [2024-07-22 10:51:16.291394] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:56:08.468 [2024-07-22 10:51:16.291469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:08.728 [2024-07-22 10:51:16.413247] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:56:08.728 [2024-07-22 10:51:16.428583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:56:08.728 [2024-07-22 10:51:16.473802] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:08.728 [2024-07-22 10:51:16.473851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:08.728 [2024-07-22 10:51:16.473861] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:08.728 [2024-07-22 10:51:16.473869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:08.728 [2024-07-22 10:51:16.473876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
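The target is launched inside that namespace and the harness then blocks in waitforlisten until the application answers on its RPC socket. A minimal standalone sketch, assuming the default /var/tmp/spdk.sock socket and the repository paths seen in the log (the exact readiness check inside waitforlisten may differ):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # poll the RPC socket until the target responds; this is essentially what
  # waitforlisten does before the test goes on to configure the subsystem
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done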
00:56:08.728 [2024-07-22 10:51:16.474403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:56:08.728 [2024-07-22 10:51:16.474549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:56:08.728 [2024-07-22 10:51:16.474864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:56:08.728 [2024-07-22 10:51:16.474867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:56:09.350 10:51:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:56:09.350 10:51:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:56:09.350 10:51:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:56:09.350 10:51:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:56:09.350 10:51:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:56:09.350 10:51:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:09.350 10:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:56:09.611 [2024-07-22 10:51:17.368727] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:09.611 10:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:56:09.871 Malloc0 00:56:09.871 10:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:56:10.130 10:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:56:10.130 10:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:56:10.389 [2024-07-22 10:51:18.151057] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:56:10.389 10:51:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:56:10.648 [2024-07-22 10:51:18.330870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:56:10.648 10:51:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:56:10.649 10:51:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:56:10.908 10:51:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:56:10.908 10:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:56:10.908 10:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:56:10.908 10:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:56:10.908 10:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=92098 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:56:13.479 10:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:56:13.479 [global] 00:56:13.479 thread=1 00:56:13.479 invalidate=1 00:56:13.479 rw=randrw 00:56:13.479 time_based=1 00:56:13.479 runtime=6 00:56:13.479 ioengine=libaio 00:56:13.479 direct=1 00:56:13.479 bs=4096 00:56:13.479 iodepth=128 00:56:13.479 norandommap=0 00:56:13.479 numjobs=1 00:56:13.479 00:56:13.479 verify_dump=1 00:56:13.479 verify_backlog=512 00:56:13.479 verify_state_save=0 00:56:13.479 do_verify=1 00:56:13.479 verify=crc32c-intel 00:56:13.479 [job0] 00:56:13.479 filename=/dev/nvme0n1 00:56:13.479 Could not set queue depth (nvme0n1) 00:56:13.479 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:56:13.479 fio-3.35 00:56:13.479 Starting 1 thread 00:56:14.047 10:51:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:56:14.306 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:56:14.565 10:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:56:15.504 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:56:15.504 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:56:15.504 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:56:15.504 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:56:15.763 10:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:56:17.141 10:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:56:17.141 10:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:56:17.141 10:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:56:17.141 10:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 92098 00:56:19.677 00:56:19.677 job0: (groupid=0, jobs=1): err= 0: pid=92124: Mon Jul 22 10:51:27 2024 00:56:19.677 read: IOPS=14.7k, BW=57.4MiB/s (60.1MB/s)(344MiB/6002msec) 00:56:19.677 slat (usec): min=3, max=3425, avg=36.00, stdev=141.52 00:56:19.677 clat (usec): min=243, max=12255, avg=5981.04, stdev=1054.47 00:56:19.677 lat (usec): min=272, max=12276, avg=6017.03, stdev=1058.64 00:56:19.677 clat percentiles (usec): 00:56:19.677 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5276], 00:56:19.677 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5932], 60.00th=[ 6128], 00:56:19.677 | 70.00th=[ 6390], 80.00th=[ 6587], 90.00th=[ 7111], 95.00th=[ 7898], 00:56:19.677 | 99.00th=[ 9241], 99.50th=[ 9765], 99.90th=[10552], 99.95th=[11207], 00:56:19.677 | 99.99th=[11994] 00:56:19.677 bw ( KiB/s): min=16128, max=38880, per=51.03%, avg=29973.09, stdev=7151.54, samples=11 00:56:19.677 iops : min= 4032, max= 9720, avg=7493.27, stdev=1787.89, samples=11 00:56:19.677 write: IOPS=8715, BW=34.0MiB/s (35.7MB/s)(179MiB/5257msec); 0 zone resets 00:56:19.677 slat (usec): min=4, max=2260, avg=50.39, stdev=85.41 00:56:19.677 clat (usec): min=121, max=11660, avg=5092.84, stdev=1048.14 00:56:19.677 lat (usec): min=224, max=11800, avg=5143.22, stdev=1049.85 00:56:19.677 clat percentiles (usec): 00:56:19.677 | 1.00th=[ 2474], 5.00th=[ 3589], 10.00th=[ 3884], 20.00th=[ 4424], 00:56:19.677 | 30.00th=[ 4686], 40.00th=[ 4883], 50.00th=[ 5080], 60.00th=[ 5276], 00:56:19.677 | 70.00th=[ 5407], 80.00th=[ 5669], 90.00th=[ 6063], 95.00th=[ 6783], 00:56:19.677 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[10028], 99.95th=[10421], 00:56:19.677 | 99.99th=[11207] 00:56:19.677 bw ( KiB/s): min=16880, max=38136, per=86.27%, avg=30073.27, stdev=6716.58, samples=11 00:56:19.677 iops : min= 4220, max= 9534, avg=7518.27, stdev=1679.09, samples=11 00:56:19.677 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:56:19.677 lat (msec) : 2=0.27%, 4=5.67%, 10=93.78%, 20=0.24% 00:56:19.677 cpu : usr=7.88%, sys=33.96%, ctx=11775, majf=0, minf=92 00:56:19.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:56:19.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:19.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:19.677 issued rwts: total=88136,45815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:19.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:19.677 00:56:19.678 Run status group 0 (all jobs): 00:56:19.678 READ: bw=57.4MiB/s (60.1MB/s), 57.4MiB/s-57.4MiB/s (60.1MB/s-60.1MB/s), io=344MiB (361MB), run=6002-6002msec 00:56:19.678 WRITE: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=179MiB (188MB), run=5257-5257msec 00:56:19.678 00:56:19.678 Disk stats (read/write): 00:56:19.678 nvme0n1: 
ios=86964/44929, merge=0/0, ticks=447626/183381, in_queue=631007, util=98.60% 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:56:19.678 10:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:56:21.057 10:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:56:21.057 10:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:56:21.057 10:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:56:21.057 10:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:56:21.057 10:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=92252 00:56:21.057 10:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:56:21.057 10:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:56:21.057 [global] 00:56:21.057 thread=1 00:56:21.057 invalidate=1 00:56:21.057 rw=randrw 00:56:21.057 time_based=1 00:56:21.057 runtime=6 00:56:21.057 ioengine=libaio 00:56:21.057 direct=1 00:56:21.057 bs=4096 00:56:21.057 iodepth=128 00:56:21.057 norandommap=0 00:56:21.057 numjobs=1 00:56:21.057 00:56:21.057 verify_dump=1 00:56:21.057 verify_backlog=512 00:56:21.057 verify_state_save=0 00:56:21.057 do_verify=1 00:56:21.057 verify=crc32c-intel 00:56:21.057 [job0] 00:56:21.057 filename=/dev/nvme0n1 00:56:21.057 Could not set queue depth (nvme0n1) 00:56:21.057 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:56:21.057 fio-3.35 00:56:21.058 Starting 1 thread 00:56:21.993 10:51:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:56:21.993 10:51:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:56:22.252 10:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:56:23.190 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:56:23.190 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:56:23.190 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:56:23.190 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:56:23.448 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:56:23.707 10:51:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:56:24.645 10:51:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:56:24.645 10:51:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:56:24.645 10:51:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:56:24.645 10:51:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 92252 00:56:27.191 00:56:27.191 job0: (groupid=0, jobs=1): err= 0: pid=92277: Mon Jul 22 10:51:34 2024 00:56:27.191 read: IOPS=15.2k, BW=59.2MiB/s (62.1MB/s)(355MiB/6004msec) 00:56:27.191 slat (usec): min=4, max=3703, avg=31.48, stdev=124.89 00:56:27.191 clat (usec): min=417, max=47556, avg=5744.00, stdev=1592.87 00:56:27.191 lat (usec): min=434, max=47570, avg=5775.48, stdev=1598.46 00:56:27.191 clat percentiles (usec): 00:56:27.191 | 1.00th=[ 2999], 5.00th=[ 3785], 10.00th=[ 4228], 20.00th=[ 4883], 00:56:27.191 | 30.00th=[ 5211], 40.00th=[ 5473], 50.00th=[ 5735], 60.00th=[ 5932], 00:56:27.191 | 70.00th=[ 6194], 80.00th=[ 6456], 90.00th=[ 6980], 95.00th=[ 7701], 00:56:27.191 | 99.00th=[ 9372], 99.50th=[10159], 99.90th=[13566], 99.95th=[46924], 00:56:27.191 | 99.99th=[47449] 00:56:27.191 bw ( KiB/s): min=12088, max=48848, per=53.29%, avg=32298.27, stdev=11909.24, samples=11 00:56:27.191 iops : min= 3022, max=12212, avg=8074.55, stdev=2977.31, samples=11 00:56:27.191 write: IOPS=9380, BW=36.6MiB/s (38.4MB/s)(189MiB/5163msec); 0 zone resets 00:56:27.191 slat (usec): min=10, max=40893, avg=45.90, stdev=201.13 00:56:27.191 clat (usec): min=275, max=47223, avg=4908.02, stdev=1990.78 00:56:27.191 lat (usec): min=303, max=47290, avg=4953.92, stdev=2004.68 00:56:27.191 clat percentiles (usec): 00:56:27.191 | 1.00th=[ 2343], 5.00th=[ 2966], 10.00th=[ 3294], 20.00th=[ 3785], 00:56:27.191 | 30.00th=[ 4228], 40.00th=[ 4621], 50.00th=[ 4948], 60.00th=[ 5145], 00:56:27.191 | 70.00th=[ 5342], 80.00th=[ 5604], 90.00th=[ 6128], 95.00th=[ 6783], 00:56:27.191 | 99.00th=[ 9110], 99.50th=[ 9896], 99.90th=[45876], 99.95th=[46400], 00:56:27.191 | 99.99th=[46924] 00:56:27.191 bw ( KiB/s): min=12328, max=48352, per=86.14%, avg=32321.00, stdev=11630.65, samples=11 00:56:27.191 iops : min= 3082, max=12088, avg=8080.18, stdev=2907.66, samples=11 00:56:27.191 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:56:27.191 lat (msec) : 2=0.20%, 4=12.79%, 10=86.39%, 20=0.46%, 50=0.09% 00:56:27.191 cpu : usr=7.48%, sys=33.84%, ctx=11493, majf=0, minf=141 00:56:27.191 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:56:27.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:27.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:27.191 issued rwts: total=90979,48429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:27.191 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:27.191 00:56:27.191 Run status group 0 (all jobs): 00:56:27.191 READ: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=355MiB (373MB), run=6004-6004msec 00:56:27.191 WRITE: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=189MiB (198MB), run=5163-5163msec 00:56:27.191 00:56:27.191 Disk stats (read/write): 00:56:27.191 nvme0n1: ios=90027/47647, merge=0/0, ticks=450151/190300, in_queue=640451, util=98.65% 00:56:27.191 10:51:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:56:27.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:56:27.191 10:51:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:56:27.191 10:51:34 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:56:27.191 10:51:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:56:27.191 10:51:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:56:27.191 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:56:27.191 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:56:27.191 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:56:27.191 10:51:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:56:27.449 rmmod nvme_tcp 00:56:27.449 rmmod nvme_fabrics 00:56:27.449 rmmod nvme_keyring 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 91966 ']' 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 91966 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 91966 ']' 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 91966 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91966 00:56:27.449 killing process with pid 91966 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91966' 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 91966 00:56:27.449 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 91966 00:56:27.707 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:56:27.707 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:56:27.707 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:56:27.708 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:56:27.708 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:56:27.708 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:27.708 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:27.708 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:27.708 10:51:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:56:27.708 ************************************ 00:56:27.708 END TEST nvmf_target_multipath 00:56:27.708 ************************************ 00:56:27.708 00:56:27.708 real 0m20.010s 00:56:27.708 user 1m16.694s 00:56:27.708 sys 0m8.377s 00:56:27.708 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:56:27.708 10:51:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:56:27.966 10:51:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:56:27.966 10:51:35 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:56:27.966 10:51:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:56:27.966 10:51:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:56:27.966 10:51:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:56:27.966 ************************************ 00:56:27.966 START TEST nvmf_zcopy 00:56:27.966 ************************************ 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:56:27.966 * Looking for test storage... 
00:56:27.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:27.966 10:51:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:56:27.967 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:56:28.224 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:56:28.224 Cannot find device "nvmf_tgt_br" 00:56:28.224 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:56:28.224 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:56:28.224 Cannot find device "nvmf_tgt_br2" 00:56:28.224 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:56:28.224 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:56:28.224 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:56:28.224 Cannot find device "nvmf_tgt_br" 00:56:28.224 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:56:28.224 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:56:28.224 Cannot find device "nvmf_tgt_br2" 00:56:28.224 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:56:28.224 10:51:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:56:28.224 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:56:28.224 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:28.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:28.224 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:56:28.224 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:28.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:28.224 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:56:28.224 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:56:28.224 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:56:28.224 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:56:28.224 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:56:28.224 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:56:28.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:56:28.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:56:28.483 00:56:28.483 --- 10.0.0.2 ping statistics --- 00:56:28.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:28.483 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:56:28.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:56:28.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:56:28.483 00:56:28.483 --- 10.0.0.3 ping statistics --- 00:56:28.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:28.483 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:56:28.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:56:28.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:56:28.483 00:56:28.483 --- 10.0.0.1 ping statistics --- 00:56:28.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:28.483 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:56:28.483 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=92553 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 92553 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 92553 ']' 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:28.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:56:28.742 10:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:28.742 [2024-07-22 10:51:36.488467] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:56:28.742 [2024-07-22 10:51:36.488555] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:28.742 [2024-07-22 10:51:36.607117] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:56:28.742 [2024-07-22 10:51:36.631934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:28.742 [2024-07-22 10:51:36.672994] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:56:28.742 [2024-07-22 10:51:36.673063] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:28.742 [2024-07-22 10:51:36.673072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:28.742 [2024-07-22 10:51:36.673080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:28.742 [2024-07-22 10:51:36.673087] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:56:28.742 [2024-07-22 10:51:36.673110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:29.678 [2024-07-22 10:51:37.393507] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:29.678 [2024-07-22 10:51:37.413561] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- 
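The rpc_cmd sequence above and immediately below is the whole zero-copy target configuration: nvmf_create_transport brings up the TCP transport with --zcopy, an in-capsule data size of 0 and the harness's -o option; nvmf_create_subsystem creates cnode1 with any-host access (-a), serial SPDK00000000000001 and at most 10 namespaces (-m 10); listeners are added for the subsystem and for discovery on 10.0.0.2:4420; and a 32 MiB malloc bdev with 4 KiB blocks is created and attached as namespace 1. Issued directly through rpc.py against the default /var/tmp/spdk.sock socket (instead of the rpc_cmd wrapper), the same steps look like:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # default RPC socket /var/tmp/spdk.sock assumed

  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy   # TCP transport with zero-copy enabled
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0          # 32 MiB backing bdev, 4 KiB block size
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1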
common/autotest_common.sh@10 -- # set +x 00:56:29.678 malloc0 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:56:29.678 10:51:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:56:29.678 { 00:56:29.678 "params": { 00:56:29.679 "name": "Nvme$subsystem", 00:56:29.679 "trtype": "$TEST_TRANSPORT", 00:56:29.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:29.679 "adrfam": "ipv4", 00:56:29.679 "trsvcid": "$NVMF_PORT", 00:56:29.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:29.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:29.679 "hdgst": ${hdgst:-false}, 00:56:29.679 "ddgst": ${ddgst:-false} 00:56:29.679 }, 00:56:29.679 "method": "bdev_nvme_attach_controller" 00:56:29.679 } 00:56:29.679 EOF 00:56:29.679 )") 00:56:29.679 10:51:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:56:29.679 10:51:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:56:29.679 10:51:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:56:29.679 10:51:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:56:29.679 "params": { 00:56:29.679 "name": "Nvme1", 00:56:29.679 "trtype": "tcp", 00:56:29.679 "traddr": "10.0.0.2", 00:56:29.679 "adrfam": "ipv4", 00:56:29.679 "trsvcid": "4420", 00:56:29.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:56:29.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:56:29.679 "hdgst": false, 00:56:29.679 "ddgst": false 00:56:29.679 }, 00:56:29.679 "method": "bdev_nvme_attach_controller" 00:56:29.679 }' 00:56:29.679 [2024-07-22 10:51:37.508921] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:56:29.679 [2024-07-22 10:51:37.508980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92604 ] 00:56:29.936 [2024-07-22 10:51:37.626639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:56:29.936 [2024-07-22 10:51:37.651117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:29.936 [2024-07-22 10:51:37.692816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:56:29.936 Running I/O for 10 seconds... 
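On the initiator side, gen_nvmf_target_json expands the heredoc template above into the bdev_nvme_attach_controller entry printed by the trace and splices it into a complete bdevperf JSON config (the outer wrapper is generated by nvmf/common.sh and is not shown in this excerpt); --json /dev/fd/62 is simply what bash process substitution looks like on the command line. A standalone sketch under those assumptions, writing the generated config to a hypothetical temp file instead:

  # gen_nvmf_target_json comes from the harness's nvmf/common.sh, assumed to be sourced already.
  gen_nvmf_target_json > /tmp/bdevperf_nvme.json      # attaches Nvme1 at 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192

The harness's equivalent is bdevperf --json <(gen_nvmf_target_json) ..., which is why the config path shows up as /dev/fd/62 in the xtrace output.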
00:56:39.915 00:56:39.915 Latency(us) 00:56:39.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:39.915 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:56:39.915 Verification LBA range: start 0x0 length 0x1000 00:56:39.915 Nvme1n1 : 10.01 8622.18 67.36 0.00 0.00 14804.24 2421.41 24529.94 00:56:39.915 =================================================================================================================== 00:56:39.915 Total : 8622.18 67.36 0.00 0.00 14804.24 2421.41 24529.94 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=92720 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:56:40.173 { 00:56:40.173 "params": { 00:56:40.173 "name": "Nvme$subsystem", 00:56:40.173 "trtype": "$TEST_TRANSPORT", 00:56:40.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:40.173 "adrfam": "ipv4", 00:56:40.173 "trsvcid": "$NVMF_PORT", 00:56:40.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:40.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:40.173 "hdgst": ${hdgst:-false}, 00:56:40.173 "ddgst": ${ddgst:-false} 00:56:40.173 }, 00:56:40.173 "method": "bdev_nvme_attach_controller" 00:56:40.173 } 00:56:40.173 EOF 00:56:40.173 )") 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:56:40.173 [2024-07-22 10:51:48.027813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.173 [2024-07-22 10:51:48.027849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:56:40.173 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
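A quick sanity check on the run summary above: throughput equals IOPS times the 8 KiB I/O size, 8622.18 x 8192 / 1048576 ≈ 67.36 MiB/s, matching the MiB/s column, and by Little's law IOPS times the 14804.24 us average latency gives roughly 128 outstanding I/Os, matching -q 128. The same arithmetic in shell:

  # IOPS * IO size (bytes) / 1 MiB -> MiB/s, and IOPS * avg latency (s) -> effective queue depth.
  awk 'BEGIN { printf "%.2f MiB/s  qd=%.1f\n", 8622.18 * 8192 / 1048576, 8622.18 * 14804.24 / 1e6 }'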
00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:56:40.173 10:51:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:56:40.173 "params": { 00:56:40.173 "name": "Nvme1", 00:56:40.173 "trtype": "tcp", 00:56:40.173 "traddr": "10.0.0.2", 00:56:40.173 "adrfam": "ipv4", 00:56:40.173 "trsvcid": "4420", 00:56:40.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:56:40.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:56:40.173 "hdgst": false, 00:56:40.174 "ddgst": false 00:56:40.174 }, 00:56:40.174 "method": "bdev_nvme_attach_controller" 00:56:40.174 }' 00:56:40.174 [2024-07-22 10:51:48.039769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.174 [2024-07-22 10:51:48.039790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.174 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.174 [2024-07-22 10:51:48.051747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.174 [2024-07-22 10:51:48.051770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.174 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.174 [2024-07-22 10:51:48.067716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.174 [2024-07-22 10:51:48.067734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.174 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.174 [2024-07-22 10:51:48.074714] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:56:40.174 [2024-07-22 10:51:48.074772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92720 ] 00:56:40.174 [2024-07-22 10:51:48.079697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.174 [2024-07-22 10:51:48.079718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.174 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.174 [2024-07-22 10:51:48.091686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.174 [2024-07-22 10:51:48.091705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.174 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.174 [2024-07-22 10:51:48.103670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.174 [2024-07-22 10:51:48.103692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.115663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.115684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.127632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.127653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.139624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.139645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:56:40.432 [2024-07-22 10:51:48.151622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.151645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.163596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.163619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.175578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.175600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.187569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.187590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.193696] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
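Note that two SPDK applications are now running side by side: the target started earlier (core mask 0x2, file prefix spdk0, inside the namespace) and this bdevperf instance (core mask 0x1, file prefix spdk_pid92720, in the root namespace), so their DPDK EAL instances and reactors do not collide. A rough sketch of that separation, assuming $SPDK points at the repo and reusing the config file from the earlier sketch; bdevperf's core mask is passed explicitly here although the trace relies on its default:

  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -m 0x2 &        # target on core 1
  "$SPDK/build/examples/bdevperf" -m 0x1 \
          --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192     # initiator on core 0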
00:56:40.432 [2024-07-22 10:51:48.199542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.199564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.211524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.211543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.218326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:40.432 [2024-07-22 10:51:48.223510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.223533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.235493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.235515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.247479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.247508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.258543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:56:40.432 [2024-07-22 10:51:48.259461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.259483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.271446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.271468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.283432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.283455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.295411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.295432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.307395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.307417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.319379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.319399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.331353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.331372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.343367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.343398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.432 [2024-07-22 10:51:48.355363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.432 [2024-07-22 10:51:48.355388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.432 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.690 [2024-07-22 10:51:48.367351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.690 [2024-07-22 10:51:48.367379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.690 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.690 [2024-07-22 10:51:48.379339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.690 [2024-07-22 10:51:48.379366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.690 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.690 [2024-07-22 10:51:48.391317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.690 [2024-07-22 10:51:48.391343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.690 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.690 [2024-07-22 10:51:48.403298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.690 [2024-07-22 10:51:48.403343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.690 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.690 Running I/O for 5 seconds... 
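The stream of Code=-32602 responses above and below reads as deliberate error-path coverage rather than a failing run: namespace 1 was already attached during setup, so each further nvmf_subsystem_add_ns for NSID 1 is rejected with "Requested NSID 1 already in use" while the second bdevperf job (-t 5 -q 128 -w randrw -M 50 -o 8192, a 50/50 random read/write mix) keeps zero-copy I/O in flight, exercising the pause/add/resume path that nvmf_rpc_ns_paused reports. One such iteration reproduced by hand, assuming the target state configured earlier and the default RPC socket:

  # Re-adding an NSID that is already attached must fail with JSON-RPC -32602 (Invalid parameters)
  # and must not disturb the I/O bdevperf is running against the same subsystem.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  if ! "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
          echo "expected failure: NSID 1 already in use on nqn.2016-06.io.spdk:cnode1"
  fi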
00:56:40.690 [2024-07-22 10:51:48.415254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.415282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.431441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.431471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.445749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.445783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.460072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.460104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.475375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.475409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.490003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.490037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.500787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.500817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.515540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.515572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.529870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.529904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.544146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.544178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.555253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.555294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.569835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.569871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.583938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.583972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.597958] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.597992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.691 [2024-07-22 10:51:48.612110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.691 [2024-07-22 10:51:48.612143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.691 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.950 [2024-07-22 10:51:48.623029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.950 [2024-07-22 10:51:48.623062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.950 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.950 [2024-07-22 10:51:48.637718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.950 [2024-07-22 10:51:48.637751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.950 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.950 [2024-07-22 10:51:48.651918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.950 [2024-07-22 10:51:48.651948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.950 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.950 [2024-07-22 10:51:48.666368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.950 [2024-07-22 10:51:48.666414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.950 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.950 [2024-07-22 10:51:48.677076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.950 [2024-07-22 10:51:48.677106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.950 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.950 [2024-07-22 10:51:48.691868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.950 [2024-07-22 10:51:48.691901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.950 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.950 [2024-07-22 10:51:48.705797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.950 [2024-07-22 10:51:48.705829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.950 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.950 [2024-07-22 10:51:48.716556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.950 [2024-07-22 10:51:48.716587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.950 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.950 [2024-07-22 10:51:48.734529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.950 [2024-07-22 10:51:48.734562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.748823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.748854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.763039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.763073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.777169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.777203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.787715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.787747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.802413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.802446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.816144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.816178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.830696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.830730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.841612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.841641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.856123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.856157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.867146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.867179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:40.951 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:40.951 [2024-07-22 10:51:48.881505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:40.951 [2024-07-22 10:51:48.881539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.210 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.210 [2024-07-22 10:51:48.895476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.210 [2024-07-22 10:51:48.895509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.210 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.210 [2024-07-22 10:51:48.909548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.210 [2024-07-22 10:51:48.909578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:48.923640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:48.923670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:48.937678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:48.937713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:48.952000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:56:41.211 [2024-07-22 10:51:48.952033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:48.962973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:48.963006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:48.977612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:48.977647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:48.991728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:48.991760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:49.005932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:49.005967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:49.016656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:49.016689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:49.031348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:49.031381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:49.045392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:49.045424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:49.059508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:49.059540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:49.073561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:49.073592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:49.087689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:49.087723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:49.104689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:49.104721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:49.121775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:49.121808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.211 [2024-07-22 10:51:49.136609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.211 [2024-07-22 10:51:49.136641] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.211 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.470 [2024-07-22 10:51:49.150183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.470 [2024-07-22 10:51:49.150217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.470 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.470 [2024-07-22 10:51:49.164758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.470 [2024-07-22 10:51:49.164789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.470 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.470 [2024-07-22 10:51:49.176148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.470 [2024-07-22 10:51:49.176181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.470 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.470 [2024-07-22 10:51:49.190505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.470 [2024-07-22 10:51:49.190548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.470 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.470 [2024-07-22 10:51:49.204576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.470 [2024-07-22 10:51:49.204607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.470 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.470 [2024-07-22 10:51:49.218856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.470 [2024-07-22 10:51:49.218885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.470 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.233000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.233033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.243862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.243894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.258469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.258501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.273778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.273809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.287937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.287971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.302006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.302039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.316336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.316367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.330563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.330596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.344878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.344912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.358911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.358943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.372959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.372992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.387069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.387103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.471 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.471 [2024-07-22 10:51:49.401352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.471 [2024-07-22 10:51:49.401410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.730 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:56:41.730 [2024-07-22 10:51:49.415282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.730 [2024-07-22 10:51:49.415315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.730 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.730 [2024-07-22 10:51:49.429326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.730 [2024-07-22 10:51:49.429364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.730 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.730 [2024-07-22 10:51:49.443192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.730 [2024-07-22 10:51:49.443225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.730 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.730 [2024-07-22 10:51:49.457615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.730 [2024-07-22 10:51:49.457647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.730 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.471663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.471696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.486515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.486548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.501160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.501191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.515312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.515344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.529138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.529171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.543344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.543377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.557493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.557523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.571836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.571868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.586046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.586079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.596925] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.596954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.611535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.611569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.622309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.622342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.636613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.636644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.731 [2024-07-22 10:51:49.650922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.731 [2024-07-22 10:51:49.650954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.731 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.990 [2024-07-22 10:51:49.664962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.990 [2024-07-22 10:51:49.664995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.990 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.990 [2024-07-22 10:51:49.679322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.990 [2024-07-22 10:51:49.679354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.990 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.990 [2024-07-22 10:51:49.693673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.990 [2024-07-22 10:51:49.693705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.990 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.990 [2024-07-22 10:51:49.708055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.990 [2024-07-22 10:51:49.708089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.723271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.723313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.737678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.737710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.751729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.751760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.765976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.766008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.780007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.780041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.790528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.790562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.805104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.805137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.818975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.819009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.833339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.833380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.847618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.847652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.861725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.861757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.872518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.872548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.887156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.887187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.898245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.898286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:41.991 [2024-07-22 10:51:49.912983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:41.991 [2024-07-22 10:51:49.913014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:41.991 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.249 [2024-07-22 10:51:49.926709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.249 [2024-07-22 10:51:49.926741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.249 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.249 [2024-07-22 10:51:49.940875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.249 [2024-07-22 10:51:49.940904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.249 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.249 [2024-07-22 10:51:49.954844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:56:42.249 [2024-07-22 10:51:49.954879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.249 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.249 [2024-07-22 10:51:49.969368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.249 [2024-07-22 10:51:49.969400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.249 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.249 [2024-07-22 10:51:49.980116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.249 [2024-07-22 10:51:49.980146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.249 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.249 [2024-07-22 10:51:49.995083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:49.995116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.010605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.010639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.025253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.025294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.036488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.036518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.050911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.050944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.065007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.065041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.076331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.076363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.091608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.091639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.107160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.107194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.121747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.121783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.138890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.138920] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.153216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.153247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.250 [2024-07-22 10:51:50.167531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.250 [2024-07-22 10:51:50.167564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.250 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.508 [2024-07-22 10:51:50.181651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.508 [2024-07-22 10:51:50.181685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.508 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.508 [2024-07-22 10:51:50.196020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.508 [2024-07-22 10:51:50.196052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.508 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.508 [2024-07-22 10:51:50.209966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.508 [2024-07-22 10:51:50.210000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.508 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.508 [2024-07-22 10:51:50.224176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.508 [2024-07-22 10:51:50.224209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.508 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.508 [2024-07-22 10:51:50.238837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.508 [2024-07-22 10:51:50.238870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.508 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.508 [2024-07-22 10:51:50.253908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.508 [2024-07-22 10:51:50.253942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.508 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.508 [2024-07-22 10:51:50.268656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.508 [2024-07-22 10:51:50.268688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.508 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.508 [2024-07-22 10:51:50.283798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.508 [2024-07-22 10:51:50.283830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.508 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.508 [2024-07-22 10:51:50.298162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.508 [2024-07-22 10:51:50.298195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.508 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.508 [2024-07-22 10:51:50.312773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.508 [2024-07-22 10:51:50.312805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.509 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.509 [2024-07-22 10:51:50.328168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.509 [2024-07-22 10:51:50.328202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:56:42.509 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.509 [2024-07-22 10:51:50.342677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.509 [2024-07-22 10:51:50.342709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.509 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.509 [2024-07-22 10:51:50.356716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.509 [2024-07-22 10:51:50.356748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.509 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.509 [2024-07-22 10:51:50.371037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.509 [2024-07-22 10:51:50.371069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.509 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.509 [2024-07-22 10:51:50.386492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.509 [2024-07-22 10:51:50.386523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.509 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.509 [2024-07-22 10:51:50.401384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.509 [2024-07-22 10:51:50.401435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.509 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.509 [2024-07-22 10:51:50.416800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.509 [2024-07-22 10:51:50.416829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.509 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:56:42.509 [2024-07-22 10:51:50.432001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.509 [2024-07-22 10:51:50.432033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.509 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.767 [2024-07-22 10:51:50.447506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.767 [2024-07-22 10:51:50.447539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.461226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.461257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.475652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.475685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.486607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.486639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.501068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.501100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.515034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.515066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.529403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.529433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.540460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.540490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.555039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.555072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.569070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.569103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.583223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.583254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.597246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.597284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.611644] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.611675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.622730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.622762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.637291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.637323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.651606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.651638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.665843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.665876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.680055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.680088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:42.768 [2024-07-22 10:51:50.690722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:42.768 [2024-07-22 10:51:50.690755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:42.768 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:56:43.027 [2024-07-22 10:51:50.705002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:56:43.027 [2024-07-22 10:51:50.705035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:56:43.027 2024/07/22 10:51:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:56:44.846 [2024-07-22 10:51:52.578845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:56:44.846 [2024-07-22 10:51:52.578879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:56:44.846 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:56:44.846 [2024-07-22 10:51:52.594938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:56:44.846 [2024-07-22 10:51:52.594970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:56:44.846 2024/07/22 10:51:52 error on
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.846 [2024-07-22 10:51:52.609841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.846 [2024-07-22 10:51:52.609872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.846 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.846 [2024-07-22 10:51:52.625279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.846 [2024-07-22 10:51:52.625308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.846 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.846 [2024-07-22 10:51:52.640048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.846 [2024-07-22 10:51:52.640079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.846 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.846 [2024-07-22 10:51:52.655497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.846 [2024-07-22 10:51:52.655529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.846 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.846 [2024-07-22 10:51:52.670519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.846 [2024-07-22 10:51:52.670551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.846 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.846 [2024-07-22 10:51:52.686143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.846 [2024-07-22 10:51:52.686174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.846 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.846 [2024-07-22 10:51:52.700995] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.846 [2024-07-22 10:51:52.701027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.846 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.846 [2024-07-22 10:51:52.716993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.846 [2024-07-22 10:51:52.717024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.846 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.846 [2024-07-22 10:51:52.728557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.846 [2024-07-22 10:51:52.728587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.846 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.847 [2024-07-22 10:51:52.744071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.847 [2024-07-22 10:51:52.744103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.847 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.847 [2024-07-22 10:51:52.759766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.847 [2024-07-22 10:51:52.759798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:44.847 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:44.847 [2024-07-22 10:51:52.774922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:44.847 [2024-07-22 10:51:52.774952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.106 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.106 [2024-07-22 10:51:52.791100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.106 [2024-07-22 10:51:52.791132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.106 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.106 [2024-07-22 10:51:52.807213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.106 [2024-07-22 10:51:52.807245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.106 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.106 [2024-07-22 10:51:52.818113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.106 [2024-07-22 10:51:52.818146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.106 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.106 [2024-07-22 10:51:52.833153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.106 [2024-07-22 10:51:52.833185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.844513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.844542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.859906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.859938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.875570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.875602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.890311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.890343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.904985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.905018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.915920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.915947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.931429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.931461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.946797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.946829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.961716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.961747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.977300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.977331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:52.992210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:52.992240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:53.007149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:53.007179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:53.023001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:53.023028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.107 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.107 [2024-07-22 10:51:53.037990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.107 [2024-07-22 10:51:53.038023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.366 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.366 [2024-07-22 10:51:53.053676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.366 [2024-07-22 10:51:53.053707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.366 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.366 [2024-07-22 10:51:53.068809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.366 [2024-07-22 10:51:53.068839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.366 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.366 [2024-07-22 10:51:53.084756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:56:45.366 [2024-07-22 10:51:53.084786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.366 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.366 [2024-07-22 10:51:53.100305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.366 [2024-07-22 10:51:53.100336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.366 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.366 [2024-07-22 10:51:53.115199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.366 [2024-07-22 10:51:53.115230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.366 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.366 [2024-07-22 10:51:53.130795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.366 [2024-07-22 10:51:53.130827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.366 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.366 [2024-07-22 10:51:53.146208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.366 [2024-07-22 10:51:53.146242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.366 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.366 [2024-07-22 10:51:53.161300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.366 [2024-07-22 10:51:53.161345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.367 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.367 [2024-07-22 10:51:53.176228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.367 [2024-07-22 10:51:53.176260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.367 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.367 [2024-07-22 10:51:53.192382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.367 [2024-07-22 10:51:53.192413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.367 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.367 [2024-07-22 10:51:53.206458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.367 [2024-07-22 10:51:53.206501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.367 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.367 [2024-07-22 10:51:53.224596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.367 [2024-07-22 10:51:53.224629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.367 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.367 [2024-07-22 10:51:53.240291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.367 [2024-07-22 10:51:53.240322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.367 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.367 [2024-07-22 10:51:53.255490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.367 [2024-07-22 10:51:53.255521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.367 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.367 [2024-07-22 10:51:53.269842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.367 [2024-07-22 10:51:53.269875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.367 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.367 [2024-07-22 10:51:53.280739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.367 [2024-07-22 10:51:53.280768] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.367 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.367 [2024-07-22 10:51:53.296299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.367 [2024-07-22 10:51:53.296330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.626 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.626 [2024-07-22 10:51:53.311603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.626 [2024-07-22 10:51:53.311631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.626 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.626 [2024-07-22 10:51:53.326611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.626 [2024-07-22 10:51:53.326644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.626 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.626 [2024-07-22 10:51:53.338070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.626 [2024-07-22 10:51:53.338103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.626 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.626 [2024-07-22 10:51:53.353920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.626 [2024-07-22 10:51:53.353953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.626 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.626 [2024-07-22 10:51:53.369979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.626 [2024-07-22 10:51:53.370011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.626 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
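For reference, the request being rejected above is the namespace-add RPC with exactly the parameters shown in the log (bdev malloc0, NSID 1, subsystem nqn.2016-06.io.spdk:cnode1). A minimal reproduction sketch, assuming the SPDK checkout at /home/vagrant/spdk_repo/spdk and its scripts/rpc.py helper talking to the default local RPC socket:

  # Sketch only: re-adding an NSID that is already in use is expected to fail
  # with JSON-RPC Code=-32602 Msg=Invalid parameters, as logged above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
      nqn.2016-06.io.spdk:cnode1 malloc0 -n 1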
00:56:45.626
00:56:45.626 Latency(us)
00:56:45.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:56:45.626 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:56:45.626 Nvme1n1 : 5.01 15975.46 124.81 0.00 0.00 8003.76 3658.44 18107.94
00:56:45.626 ===================================================================================================================
00:56:45.626 Total : 15975.46 124.81 0.00 0.00 8003.76 3658.44 18107.94
[... further identical nvmf_subsystem_add_ns failures follow from 10:51:53.420941 through 10:51:53.672581; the final attempt is shown below ...]
00:56:45.886 [2024-07-22 10:51:53.684564]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:56:45.886 [2024-07-22 10:51:53.684586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:45.886 2024/07/22 10:51:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:45.886 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (92720) - No such process 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 92720 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:45.886 delay0 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:45.886 10:51:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:56:46.145 [2024-07-22 10:51:53.905004] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:56:54.281 Initializing NVMe Controllers 00:56:54.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:56:54.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:56:54.281 Initialization complete. Launching workers. 
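In plain shell, the sequence traced above amounts to the sketch below: it swaps namespace 1 over to a deliberately slow delay bdev and then drives it with the abort example so that outstanding I/O can be aborted before it completes. All commands and arguments are copied from the trace itself; rpc_cmd is the test suite's wrapper around scripts/rpc.py.

  # Detach the malloc namespace and re-expose it behind a delay bdev.
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Run the abort example against the TCP listener; its per-namespace abort
  # statistics are reported in the output that follows.
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
      -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'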
00:56:54.281 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 267, failed: 22917 00:56:54.281 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23099, failed to submit 85 00:56:54.281 success 22966, unsuccess 133, failed 0 00:56:54.281 10:52:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:56:54.281 10:52:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:56:54.281 10:52:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:56:54.281 10:52:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:56:54.281 10:52:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:56:54.281 10:52:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:56:54.281 10:52:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:56:54.281 10:52:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:56:54.281 rmmod nvme_tcp 00:56:54.281 rmmod nvme_fabrics 00:56:54.281 rmmod nvme_keyring 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 92553 ']' 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 92553 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 92553 ']' 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 92553 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92553 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92553' 00:56:54.281 killing process with pid 92553 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 92553 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 92553 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:56:54.281 00:56:54.281 real 0m25.616s 00:56:54.281 user 0m39.455s 00:56:54.281 sys 0m9.325s 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:56:54.281 10:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:54.281 ************************************ 00:56:54.281 END TEST nvmf_zcopy 00:56:54.281 ************************************ 00:56:54.281 10:52:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:56:54.281 10:52:01 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:56:54.281 10:52:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:56:54.281 10:52:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:56:54.281 10:52:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:56:54.281 ************************************ 00:56:54.281 START TEST nvmf_nmic 00:56:54.281 ************************************ 00:56:54.281 10:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:56:54.281 * Looking for test storage... 00:56:54.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:56:54.282 Cannot find device "nvmf_tgt_br" 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:56:54.282 Cannot find device "nvmf_tgt_br2" 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:56:54.282 Cannot find device "nvmf_tgt_br" 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:56:54.282 Cannot find device "nvmf_tgt_br2" 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:54.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:54.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:56:54.282 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:56:54.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:56:54.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:56:54.283 00:56:54.283 --- 10.0.0.2 ping statistics --- 00:56:54.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:54.283 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:56:54.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:56:54.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:56:54.283 00:56:54.283 --- 10.0.0.3 ping statistics --- 00:56:54.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:54.283 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:56:54.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:56:54.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:56:54.283 00:56:54.283 --- 10.0.0.1 ping statistics --- 00:56:54.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:54.283 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:56:54.283 10:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=93052 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 93052 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 93052 ']' 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:56:54.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:56:54.283 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:54.283 [2024-07-22 10:52:02.086078] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
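(Aside on the nvmf_veth_init trace above: it builds the virtual topology the rest of this run depends on - a network namespace for the target, veth pairs, and a bridge joining them. A condensed, standalone sketch of those same steps is given below; interface names, addresses, and the port-4420 rule are copied from the log, it must run as root, it only shows the first target interface, and it is a rough equivalent rather than a substitute for nvmf/common.sh.)
# Sketch of the topology nvmf_veth_init created above (assumes a clean host).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side pair
# (the log also adds nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3 the same way)
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target reachability, as verified in the log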
00:56:54.283 [2024-07-22 10:52:02.086139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:54.549 [2024-07-22 10:52:02.207194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:56:54.549 [2024-07-22 10:52:02.231988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:56:54.549 [2024-07-22 10:52:02.299284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:54.549 [2024-07-22 10:52:02.299335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:54.549 [2024-07-22 10:52:02.299345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:54.549 [2024-07-22 10:52:02.299353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:54.549 [2024-07-22 10:52:02.299359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:56:54.549 [2024-07-22 10:52:02.300189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:56:54.549 [2024-07-22 10:52:02.300326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:56:54.549 [2024-07-22 10:52:02.300439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:56:54.549 [2024-07-22 10:52:02.300443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:56:55.116 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:56:55.116 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:56:55.116 10:52:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:56:55.116 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:56:55.116 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:55.116 10:52:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:55.116 10:52:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:56:55.116 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:55.116 10:52:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:55.116 [2024-07-22 10:52:02.993598] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:55.116 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:55.116 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:56:55.116 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:55.116 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:55.116 Malloc0 00:56:55.116 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:55.116 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:56:55.116 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:55.116 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:55.373 10:52:03 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:55.373 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:56:55.373 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:55.373 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:55.373 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:55.373 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:56:55.373 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:55.373 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:55.373 [2024-07-22 10:52:03.077374] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:56:55.373 test case1: single bdev can't be used in multiple subsystems 00:56:55.373 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:55.374 [2024-07-22 10:52:03.113171] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:56:55.374 [2024-07-22 10:52:03.113199] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:56:55.374 [2024-07-22 10:52:03.113209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:55.374 2024/07/22 10:52:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:55.374 request: 00:56:55.374 { 00:56:55.374 "method": "nvmf_subsystem_add_ns", 00:56:55.374 "params": { 00:56:55.374 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:56:55.374 "namespace": { 00:56:55.374 "bdev_name": "Malloc0", 00:56:55.374 "no_auto_visible": false 00:56:55.374 } 00:56:55.374 } 00:56:55.374 } 00:56:55.374 Got JSON-RPC error response 00:56:55.374 
GoRPCClient: error on JSON-RPC call 00:56:55.374 Adding namespace failed - expected result. 00:56:55.374 test case2: host connect to nvmf target in multiple paths 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:55.374 [2024-07-22 10:52:03.129318] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:56:55.374 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:56:55.632 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:56:55.632 10:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:56:55.632 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:56:55.632 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:56:55.632 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:56:55.632 10:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:56:58.162 10:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:56:58.162 10:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:56:58.162 10:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:56:58.162 10:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:56:58.162 10:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:56:58.162 10:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:56:58.162 10:52:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:56:58.162 [global] 00:56:58.162 thread=1 00:56:58.162 invalidate=1 00:56:58.162 rw=write 00:56:58.162 time_based=1 00:56:58.162 runtime=1 00:56:58.162 ioengine=libaio 00:56:58.162 direct=1 00:56:58.162 bs=4096 00:56:58.162 iodepth=1 00:56:58.162 norandommap=0 00:56:58.162 numjobs=1 00:56:58.162 00:56:58.162 verify_dump=1 00:56:58.162 verify_backlog=512 00:56:58.162 verify_state_save=0 00:56:58.162 do_verify=1 00:56:58.162 verify=crc32c-intel 00:56:58.162 [job0] 00:56:58.162 filename=/dev/nvme0n1 00:56:58.162 Could not set 
queue depth (nvme0n1) 00:56:58.162 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:58.162 fio-3.35 00:56:58.162 Starting 1 thread 00:56:59.105 00:56:59.105 job0: (groupid=0, jobs=1): err= 0: pid=93166: Mon Jul 22 10:52:06 2024 00:56:59.105 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:56:59.105 slat (nsec): min=7098, max=24631, avg=8501.45, stdev=912.80 00:56:59.105 clat (usec): min=100, max=679, avg=129.27, stdev=14.96 00:56:59.105 lat (usec): min=109, max=687, avg=137.77, stdev=15.01 00:56:59.105 clat percentiles (usec): 00:56:59.105 | 1.00th=[ 110], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 121], 00:56:59.105 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 131], 00:56:59.105 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 147], 00:56:59.105 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 227], 99.95th=[ 322], 00:56:59.105 | 99.99th=[ 676] 00:56:59.105 write: IOPS=4188, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1001msec); 0 zone resets 00:56:59.105 slat (usec): min=12, max=148, avg=14.11, stdev= 5.35 00:56:59.105 clat (usec): min=65, max=224, avg=88.02, stdev= 9.11 00:56:59.105 lat (usec): min=77, max=309, avg=102.12, stdev=11.69 00:56:59.105 clat percentiles (usec): 00:56:59.105 | 1.00th=[ 72], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 81], 00:56:59.105 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 90], 00:56:59.105 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 103], 00:56:59.105 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 137], 99.95th=[ 161], 00:56:59.105 | 99.99th=[ 225] 00:56:59.105 bw ( KiB/s): min=16640, max=16640, per=99.31%, avg=16640.00, stdev= 0.00, samples=1 00:56:59.105 iops : min= 4160, max= 4160, avg=4160.00, stdev= 0.00, samples=1 00:56:59.105 lat (usec) : 100=46.17%, 250=53.79%, 500=0.02%, 750=0.01% 00:56:59.105 cpu : usr=1.70%, sys=7.10%, ctx=8289, majf=0, minf=2 00:56:59.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:59.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:59.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:59.105 issued rwts: total=4096,4193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:59.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:59.105 00:56:59.105 Run status group 0 (all jobs): 00:56:59.105 READ: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:56:59.105 WRITE: bw=16.4MiB/s (17.2MB/s), 16.4MiB/s-16.4MiB/s (17.2MB/s-17.2MB/s), io=16.4MiB (17.2MB), run=1001-1001msec 00:56:59.105 00:56:59.105 Disk stats (read/write): 00:56:59.105 nvme0n1: ios=3634/3909, merge=0/0, ticks=478/366, in_queue=844, util=91.08% 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:56:59.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 
-- # grep -q -w SPDKISFASTANDAWESOME 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:56:59.105 10:52:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:56:59.105 rmmod nvme_tcp 00:56:59.105 rmmod nvme_fabrics 00:56:59.105 rmmod nvme_keyring 00:56:59.105 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 93052 ']' 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 93052 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 93052 ']' 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 93052 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93052 00:56:59.363 killing process with pid 93052 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93052' 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 93052 00:56:59.363 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 93052 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:56:59.621 00:56:59.621 real 0m6.044s 00:56:59.621 user 0m19.860s 00:56:59.621 sys 0m1.446s 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:56:59.621 10:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:59.621 ************************************ 00:56:59.621 END TEST 
nvmf_nmic 00:56:59.621 ************************************ 00:56:59.621 10:52:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:56:59.621 10:52:07 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:56:59.621 10:52:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:56:59.621 10:52:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:56:59.621 10:52:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:56:59.621 ************************************ 00:56:59.621 START TEST nvmf_fio_target 00:56:59.621 ************************************ 00:56:59.621 10:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:56:59.880 * Looking for test storage... 00:56:59.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:56:59.880 10:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:56:59.881 Cannot find device "nvmf_tgt_br" 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:56:59.881 Cannot find device "nvmf_tgt_br2" 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:56:59.881 Cannot find device "nvmf_tgt_br" 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:56:59.881 Cannot find device "nvmf_tgt_br2" 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:56:59.881 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:57:00.139 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:57:00.139 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:00.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:00.139 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:57:00.139 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:00.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:00.139 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:00.140 10:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:00.140 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:00.140 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:57:00.140 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:57:00.140 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:57:00.140 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:57:00.140 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:57:00.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:00.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:57:00.398 00:57:00.398 --- 10.0.0.2 ping statistics --- 00:57:00.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:00.398 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:57:00.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:57:00.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:57:00.398 00:57:00.398 --- 10.0.0.3 ping statistics --- 00:57:00.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:00.398 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:00.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:00.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:57:00.398 00:57:00.398 --- 10.0.0.1 ping statistics --- 00:57:00.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:00.398 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=93348 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 93348 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 93348 ']' 00:57:00.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
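(Aside: as in the previous test, nvmfappstart runs the target inside the namespace and then waits for its RPC socket. A minimal equivalent of that launch-and-wait step is sketched below; the binary and socket paths are copied from the log, while the polling loop and the rpc_get_methods probe are an illustrative stand-in for the autotest's waitforlisten helper, not its actual implementation.)
# Start nvmf_tgt inside the target namespace and wait for /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
wrapper_pid=$!   # pid of the ip-netns wrapper; the log's 93348 is the app's own pid
until [ -S /var/tmp/spdk.sock ] && \
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt is up and listening on /var/tmp/spdk.sock"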
00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:57:00.398 10:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:57:00.398 [2024-07-22 10:52:08.207622] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:57:00.398 [2024-07-22 10:52:08.207679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:00.398 [2024-07-22 10:52:08.326828] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:57:00.657 [2024-07-22 10:52:08.351175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:00.657 [2024-07-22 10:52:08.391904] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:00.657 [2024-07-22 10:52:08.392101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:00.657 [2024-07-22 10:52:08.392152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:00.657 [2024-07-22 10:52:08.392204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:00.657 [2024-07-22 10:52:08.392273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
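(Aside: the app_setup_trace notices above spell out how to inspect the tracepoints enabled by -e 0xFFFF. Following those hints verbatim - with the assumption that spdk_trace lives in the same build tree as nvmf_tgt, and with an arbitrary copy destination - gives the short sequence below.)
# Snapshot the nvmf app's trace at runtime, per the notice in the log...
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
# ...or copy the shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # destination path is illustrative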
00:57:00.657 [2024-07-22 10:52:08.392558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:57:00.657 [2024-07-22 10:52:08.392758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:57:00.657 [2024-07-22 10:52:08.393568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:57:00.657 [2024-07-22 10:52:08.393568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:57:01.223 10:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:57:01.224 10:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:57:01.224 10:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:57:01.224 10:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:57:01.224 10:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:57:01.224 10:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:01.224 10:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:57:01.482 [2024-07-22 10:52:09.268467] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:01.482 10:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:01.741 10:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:57:01.741 10:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:02.000 10:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:57:02.000 10:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:02.259 10:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:57:02.259 10:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:02.259 10:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:57:02.259 10:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:57:02.519 10:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:02.777 10:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:57:02.777 10:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:03.036 10:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:57:03.036 10:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:03.332 10:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:57:03.332 10:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:57:03.332 10:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:57:03.593 10:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:57:03.593 10:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:57:03.851 10:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:57:03.851 10:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:57:03.852 10:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:57:04.110 [2024-07-22 10:52:11.918387] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:57:04.110 10:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:57:04.367 10:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:57:04.625 10:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:57:04.625 10:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:57:04.625 10:52:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:57:04.625 10:52:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:57:04.625 10:52:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:57:04.625 10:52:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:57:04.625 10:52:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:57:07.162 10:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:57:07.162 10:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:57:07.162 10:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:57:07.162 10:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:57:07.162 10:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:57:07.163 10:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:57:07.163 10:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:57:07.163 [global] 00:57:07.163 thread=1 00:57:07.163 invalidate=1 00:57:07.163 rw=write 00:57:07.163 time_based=1 00:57:07.163 runtime=1 00:57:07.163 ioengine=libaio 00:57:07.163 direct=1 00:57:07.163 bs=4096 00:57:07.163 iodepth=1 00:57:07.163 norandommap=0 00:57:07.163 numjobs=1 00:57:07.163 00:57:07.163 verify_dump=1 00:57:07.163 verify_backlog=512 00:57:07.163 verify_state_save=0 00:57:07.163 do_verify=1 00:57:07.163 verify=crc32c-intel 00:57:07.163 [job0] 00:57:07.163 filename=/dev/nvme0n1 00:57:07.163 [job1] 00:57:07.163 filename=/dev/nvme0n2 00:57:07.163 [job2] 
00:57:07.163 filename=/dev/nvme0n3 00:57:07.163 [job3] 00:57:07.163 filename=/dev/nvme0n4 00:57:07.163 Could not set queue depth (nvme0n1) 00:57:07.163 Could not set queue depth (nvme0n2) 00:57:07.163 Could not set queue depth (nvme0n3) 00:57:07.163 Could not set queue depth (nvme0n4) 00:57:07.163 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:07.163 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:07.163 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:07.163 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:07.163 fio-3.35 00:57:07.163 Starting 4 threads 00:57:08.096 00:57:08.096 job0: (groupid=0, jobs=1): err= 0: pid=93629: Mon Jul 22 10:52:15 2024 00:57:08.096 read: IOPS=1320, BW=5283KiB/s (5410kB/s)(5288KiB/1001msec) 00:57:08.096 slat (nsec): min=15645, max=67578, avg=27245.37, stdev=5539.40 00:57:08.096 clat (usec): min=215, max=5048, avg=397.66, stdev=152.62 00:57:08.096 lat (usec): min=242, max=5078, avg=424.90, stdev=153.15 00:57:08.096 clat percentiles (usec): 00:57:08.096 | 1.00th=[ 247], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 330], 00:57:08.096 | 30.00th=[ 351], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 400], 00:57:08.096 | 70.00th=[ 424], 80.00th=[ 457], 90.00th=[ 494], 95.00th=[ 529], 00:57:08.096 | 99.00th=[ 594], 99.50th=[ 635], 99.90th=[ 1450], 99.95th=[ 5080], 00:57:08.096 | 99.99th=[ 5080] 00:57:08.096 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:57:08.096 slat (usec): min=25, max=109, avg=40.44, stdev= 9.30 00:57:08.096 clat (usec): min=97, max=4204, avg=239.84, stdev=118.01 00:57:08.096 lat (usec): min=127, max=4251, avg=280.28, stdev=119.74 00:57:08.096 clat percentiles (usec): 00:57:08.096 | 1.00th=[ 121], 5.00th=[ 147], 10.00th=[ 159], 20.00th=[ 182], 00:57:08.096 | 30.00th=[ 202], 40.00th=[ 219], 50.00th=[ 233], 60.00th=[ 251], 00:57:08.096 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 338], 00:57:08.096 | 99.00th=[ 367], 99.50th=[ 408], 99.90th=[ 510], 99.95th=[ 4228], 00:57:08.096 | 99.99th=[ 4228] 00:57:08.096 bw ( KiB/s): min= 8192, max= 8192, per=31.15%, avg=8192.00, stdev= 0.00, samples=1 00:57:08.096 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:57:08.096 lat (usec) : 100=0.07%, 250=32.44%, 500=63.26%, 750=4.06%, 1000=0.03% 00:57:08.096 lat (msec) : 2=0.07%, 10=0.07% 00:57:08.096 cpu : usr=1.60%, sys=7.60%, ctx=2858, majf=0, minf=13 00:57:08.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:08.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:08.096 issued rwts: total=1322,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:08.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:08.096 job1: (groupid=0, jobs=1): err= 0: pid=93630: Mon Jul 22 10:52:15 2024 00:57:08.096 read: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec) 00:57:08.096 slat (nsec): min=14706, max=61105, avg=27832.78, stdev=5807.90 00:57:08.096 clat (usec): min=147, max=1340, avg=322.99, stdev=60.56 00:57:08.096 lat (usec): min=168, max=1364, avg=350.83, stdev=62.42 00:57:08.096 clat percentiles (usec): 00:57:08.096 | 1.00th=[ 192], 5.00th=[ 243], 10.00th=[ 260], 20.00th=[ 281], 00:57:08.096 | 30.00th=[ 293], 40.00th=[ 
306], 50.00th=[ 326], 60.00th=[ 343], 00:57:08.096 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 383], 95.00th=[ 396], 00:57:08.096 | 99.00th=[ 424], 99.50th=[ 441], 99.90th=[ 1156], 99.95th=[ 1336], 00:57:08.096 | 99.99th=[ 1336] 00:57:08.096 write: IOPS=1612, BW=6448KiB/s (6603kB/s)(6448KiB/1000msec); 0 zone resets 00:57:08.096 slat (usec): min=18, max=134, avg=38.58, stdev=11.70 00:57:08.096 clat (usec): min=94, max=766, avg=241.93, stdev=46.56 00:57:08.096 lat (usec): min=119, max=789, avg=280.51, stdev=53.24 00:57:08.096 clat percentiles (usec): 00:57:08.096 | 1.00th=[ 149], 5.00th=[ 169], 10.00th=[ 186], 20.00th=[ 202], 00:57:08.096 | 30.00th=[ 215], 40.00th=[ 227], 50.00th=[ 239], 60.00th=[ 255], 00:57:08.096 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 314], 00:57:08.096 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 379], 99.95th=[ 766], 00:57:08.096 | 99.99th=[ 766] 00:57:08.096 bw ( KiB/s): min= 8048, max= 8048, per=30.60%, avg=8048.00, stdev= 0.00, samples=1 00:57:08.096 iops : min= 2012, max= 2012, avg=2012.00, stdev= 0.00, samples=1 00:57:08.096 lat (usec) : 100=0.06%, 250=32.88%, 500=66.93%, 1000=0.06% 00:57:08.096 lat (msec) : 2=0.06% 00:57:08.096 cpu : usr=2.40%, sys=7.80%, ctx=3150, majf=0, minf=7 00:57:08.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:08.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:08.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:08.097 issued rwts: total=1536,1612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:08.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:08.097 job2: (groupid=0, jobs=1): err= 0: pid=93631: Mon Jul 22 10:52:15 2024 00:57:08.097 read: IOPS=1414, BW=5658KiB/s (5794kB/s)(5664KiB/1001msec) 00:57:08.097 slat (nsec): min=21048, max=92340, avg=30382.50, stdev=4673.02 00:57:08.097 clat (usec): min=181, max=1070, avg=337.78, stdev=45.81 00:57:08.097 lat (usec): min=209, max=1101, avg=368.16, stdev=46.08 00:57:08.097 clat percentiles (usec): 00:57:08.097 | 1.00th=[ 237], 5.00th=[ 265], 10.00th=[ 281], 20.00th=[ 302], 00:57:08.097 | 30.00th=[ 318], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:57:08.097 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 388], 95.00th=[ 404], 00:57:08.097 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 474], 99.95th=[ 1074], 00:57:08.097 | 99.99th=[ 1074] 00:57:08.097 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:57:08.097 slat (usec): min=27, max=101, avg=46.20, stdev= 6.46 00:57:08.097 clat (usec): min=144, max=376, avg=259.28, stdev=37.99 00:57:08.097 lat (usec): min=193, max=424, avg=305.48, stdev=39.42 00:57:08.097 clat percentiles (usec): 00:57:08.097 | 1.00th=[ 172], 5.00th=[ 194], 10.00th=[ 206], 20.00th=[ 227], 00:57:08.097 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 273], 00:57:08.097 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:57:08.097 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 375], 99.95th=[ 375], 00:57:08.097 | 99.99th=[ 375] 00:57:08.097 bw ( KiB/s): min= 7592, max= 7592, per=28.87%, avg=7592.00, stdev= 0.00, samples=1 00:57:08.097 iops : min= 1898, max= 1898, avg=1898.00, stdev= 0.00, samples=1 00:57:08.097 lat (usec) : 250=21.14%, 500=78.83% 00:57:08.097 lat (msec) : 2=0.03% 00:57:08.097 cpu : usr=2.40%, sys=8.50%, ctx=2952, majf=0, minf=9 00:57:08.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:08.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:57:08.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:08.097 issued rwts: total=1416,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:08.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:08.097 job3: (groupid=0, jobs=1): err= 0: pid=93632: Mon Jul 22 10:52:15 2024 00:57:08.097 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:57:08.097 slat (nsec): min=11052, max=54152, avg=17430.58, stdev=6042.60 00:57:08.097 clat (usec): min=152, max=7582, avg=332.56, stdev=303.51 00:57:08.097 lat (usec): min=166, max=7595, avg=349.99, stdev=304.22 00:57:08.097 clat percentiles (usec): 00:57:08.097 | 1.00th=[ 165], 5.00th=[ 188], 10.00th=[ 219], 20.00th=[ 265], 00:57:08.097 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 334], 00:57:08.097 | 70.00th=[ 351], 80.00th=[ 371], 90.00th=[ 400], 95.00th=[ 424], 00:57:08.097 | 99.00th=[ 502], 99.50th=[ 586], 99.90th=[ 7177], 99.95th=[ 7570], 00:57:08.097 | 99.99th=[ 7570] 00:57:08.097 write: IOPS=1895, BW=7580KiB/s (7762kB/s)(7588KiB/1001msec); 0 zone resets 00:57:08.097 slat (usec): min=15, max=101, avg=30.54, stdev=10.32 00:57:08.097 clat (usec): min=88, max=3140, avg=210.59, stdev=87.91 00:57:08.097 lat (usec): min=109, max=3174, avg=241.12, stdev=90.18 00:57:08.097 clat percentiles (usec): 00:57:08.097 | 1.00th=[ 103], 5.00th=[ 120], 10.00th=[ 141], 20.00th=[ 161], 00:57:08.097 | 30.00th=[ 176], 40.00th=[ 190], 50.00th=[ 210], 60.00th=[ 225], 00:57:08.097 | 70.00th=[ 239], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 297], 00:57:08.097 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 1020], 99.95th=[ 3130], 00:57:08.097 | 99.99th=[ 3130] 00:57:08.097 bw ( KiB/s): min= 8192, max= 8192, per=31.15%, avg=8192.00, stdev= 0.00, samples=1 00:57:08.097 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:57:08.097 lat (usec) : 100=0.35%, 250=49.29%, 500=49.78%, 750=0.32%, 1000=0.03% 00:57:08.097 lat (msec) : 2=0.06%, 4=0.09%, 10=0.09% 00:57:08.097 cpu : usr=0.90%, sys=6.40%, ctx=3433, majf=0, minf=6 00:57:08.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:08.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:08.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:08.097 issued rwts: total=1536,1897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:08.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:08.097 00:57:08.097 Run status group 0 (all jobs): 00:57:08.097 READ: bw=22.7MiB/s (23.8MB/s), 5283KiB/s-6144KiB/s (5410kB/s-6291kB/s), io=22.7MiB (23.8MB), run=1000-1001msec 00:57:08.097 WRITE: bw=25.7MiB/s (26.9MB/s), 6138KiB/s-7580KiB/s (6285kB/s-7762kB/s), io=25.7MiB (27.0MB), run=1000-1001msec 00:57:08.097 00:57:08.097 Disk stats (read/write): 00:57:08.097 nvme0n1: ios=1113/1536, merge=0/0, ticks=428/380, in_queue=808, util=88.87% 00:57:08.097 nvme0n2: ios=1245/1536, merge=0/0, ticks=459/399, in_queue=858, util=94.24% 00:57:08.097 nvme0n3: ios=1100/1536, merge=0/0, ticks=446/426, in_queue=872, util=94.56% 00:57:08.097 nvme0n4: ios=1406/1536, merge=0/0, ticks=461/349, in_queue=810, util=88.96% 00:57:08.097 10:52:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:57:08.097 [global] 00:57:08.097 thread=1 00:57:08.097 invalidate=1 00:57:08.097 rw=randwrite 00:57:08.097 time_based=1 00:57:08.097 runtime=1 00:57:08.097 ioengine=libaio 00:57:08.097 direct=1 00:57:08.097 
bs=4096 00:57:08.097 iodepth=1 00:57:08.097 norandommap=0 00:57:08.097 numjobs=1 00:57:08.097 00:57:08.097 verify_dump=1 00:57:08.097 verify_backlog=512 00:57:08.097 verify_state_save=0 00:57:08.097 do_verify=1 00:57:08.097 verify=crc32c-intel 00:57:08.097 [job0] 00:57:08.097 filename=/dev/nvme0n1 00:57:08.097 [job1] 00:57:08.097 filename=/dev/nvme0n2 00:57:08.097 [job2] 00:57:08.097 filename=/dev/nvme0n3 00:57:08.097 [job3] 00:57:08.097 filename=/dev/nvme0n4 00:57:08.355 Could not set queue depth (nvme0n1) 00:57:08.355 Could not set queue depth (nvme0n2) 00:57:08.355 Could not set queue depth (nvme0n3) 00:57:08.355 Could not set queue depth (nvme0n4) 00:57:08.355 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:08.355 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:08.355 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:08.355 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:08.355 fio-3.35 00:57:08.355 Starting 4 threads 00:57:09.731 00:57:09.731 job0: (groupid=0, jobs=1): err= 0: pid=93691: Mon Jul 22 10:52:17 2024 00:57:09.731 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:57:09.731 slat (nsec): min=9249, max=28249, avg=10798.42, stdev=1350.33 00:57:09.731 clat (usec): min=133, max=461, avg=202.48, stdev=61.54 00:57:09.731 lat (usec): min=143, max=473, avg=213.28, stdev=62.02 00:57:09.731 clat percentiles (usec): 00:57:09.731 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:57:09.731 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 176], 00:57:09.731 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 302], 00:57:09.731 | 99.00th=[ 338], 99.50th=[ 379], 99.90th=[ 416], 99.95th=[ 420], 00:57:09.731 | 99.99th=[ 461] 00:57:09.731 write: IOPS=2773, BW=10.8MiB/s (11.4MB/s)(10.8MiB/1001msec); 0 zone resets 00:57:09.731 slat (usec): min=14, max=104, avg=17.87, stdev= 5.40 00:57:09.731 clat (usec): min=76, max=1394, avg=143.90, stdev=55.87 00:57:09.731 lat (usec): min=93, max=1411, avg=161.76, stdev=57.02 00:57:09.731 clat percentiles (usec): 00:57:09.731 | 1.00th=[ 89], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 100], 00:57:09.731 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 113], 60.00th=[ 172], 00:57:09.731 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 215], 00:57:09.731 | 99.00th=[ 241], 99.50th=[ 258], 99.90th=[ 570], 99.95th=[ 734], 00:57:09.731 | 99.99th=[ 1401] 00:57:09.731 bw ( KiB/s): min= 8192, max= 8192, per=23.49%, avg=8192.00, stdev= 0.00, samples=1 00:57:09.731 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:57:09.731 lat (usec) : 100=10.74%, 250=73.13%, 500=16.08%, 750=0.04% 00:57:09.731 lat (msec) : 2=0.02% 00:57:09.731 cpu : usr=1.40%, sys=5.00%, ctx=5336, majf=0, minf=8 00:57:09.731 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:09.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:09.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:09.731 issued rwts: total=2560,2776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:09.731 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:09.731 job1: (groupid=0, jobs=1): err= 0: pid=93692: Mon Jul 22 10:52:17 2024 00:57:09.731 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:57:09.731 slat 
(nsec): min=9406, max=40645, avg=13906.96, stdev=4000.51 00:57:09.731 clat (usec): min=150, max=2161, avg=363.92, stdev=71.79 00:57:09.731 lat (usec): min=163, max=2174, avg=377.83, stdev=72.69 00:57:09.731 clat percentiles (usec): 00:57:09.731 | 1.00th=[ 277], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 330], 00:57:09.731 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:57:09.731 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 408], 95.00th=[ 424], 00:57:09.731 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 1795], 99.95th=[ 2147], 00:57:09.731 | 99.99th=[ 2147] 00:57:09.731 write: IOPS=1855, BW=7421KiB/s (7599kB/s)(7428KiB/1001msec); 0 zone resets 00:57:09.731 slat (usec): min=15, max=100, avg=22.15, stdev= 7.11 00:57:09.731 clat (usec): min=139, max=427, avg=201.63, stdev=26.52 00:57:09.731 lat (usec): min=162, max=443, avg=223.78, stdev=28.11 00:57:09.731 clat percentiles (usec): 00:57:09.731 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 178], 00:57:09.731 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 210], 00:57:09.731 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 245], 00:57:09.731 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 281], 99.95th=[ 429], 00:57:09.731 | 99.99th=[ 429] 00:57:09.731 bw ( KiB/s): min= 8192, max= 8192, per=23.49%, avg=8192.00, stdev= 0.00, samples=1 00:57:09.731 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:57:09.731 lat (usec) : 250=53.14%, 500=46.71%, 750=0.03%, 1000=0.06% 00:57:09.731 lat (msec) : 2=0.03%, 4=0.03% 00:57:09.731 cpu : usr=0.70%, sys=4.70%, ctx=3394, majf=0, minf=17 00:57:09.731 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:09.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:09.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:09.731 issued rwts: total=1536,1857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:09.731 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:09.731 job2: (groupid=0, jobs=1): err= 0: pid=93693: Mon Jul 22 10:52:17 2024 00:57:09.731 read: IOPS=2021, BW=8088KiB/s (8282kB/s)(8096KiB/1001msec) 00:57:09.731 slat (nsec): min=7117, max=24752, avg=11650.93, stdev=2060.91 00:57:09.731 clat (usec): min=129, max=657, avg=284.75, stdev=86.18 00:57:09.731 lat (usec): min=141, max=665, avg=296.40, stdev=85.44 00:57:09.731 clat percentiles (usec): 00:57:09.731 | 1.00th=[ 145], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 215], 00:57:09.731 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 322], 00:57:09.731 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 400], 95.00th=[ 412], 00:57:09.731 | 99.00th=[ 445], 99.50th=[ 453], 99.90th=[ 498], 99.95th=[ 502], 00:57:09.731 | 99.99th=[ 660] 00:57:09.731 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:57:09.731 slat (usec): min=11, max=107, avg=17.39, stdev= 6.48 00:57:09.731 clat (usec): min=86, max=273, avg=176.23, stdev=35.76 00:57:09.731 lat (usec): min=102, max=290, avg=193.61, stdev=36.48 00:57:09.731 clat percentiles (usec): 00:57:09.731 | 1.00th=[ 103], 5.00th=[ 113], 10.00th=[ 123], 20.00th=[ 137], 00:57:09.731 | 30.00th=[ 159], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 194], 00:57:09.731 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 227], 00:57:09.731 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 260], 99.95th=[ 273], 00:57:09.731 | 99.99th=[ 273] 00:57:09.731 bw ( KiB/s): min=10528, max=10528, per=30.18%, avg=10528.00, stdev= 0.00, samples=1 00:57:09.731 iops : min= 
2632, max= 2632, avg=2632.00, stdev= 0.00, samples=1 00:57:09.731 lat (usec) : 100=0.22%, 250=74.44%, 500=25.29%, 750=0.05% 00:57:09.731 cpu : usr=1.10%, sys=3.90%, ctx=4075, majf=0, minf=11 00:57:09.731 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:09.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:09.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:09.731 issued rwts: total=2024,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:09.731 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:09.731 job3: (groupid=0, jobs=1): err= 0: pid=93694: Mon Jul 22 10:52:17 2024 00:57:09.731 read: IOPS=1634, BW=6537KiB/s (6694kB/s)(6544KiB/1001msec) 00:57:09.731 slat (nsec): min=7150, max=31178, avg=12581.14, stdev=3500.82 00:57:09.731 clat (usec): min=179, max=7571, avg=335.71, stdev=322.71 00:57:09.731 lat (usec): min=192, max=7582, avg=348.29, stdev=322.87 00:57:09.731 clat percentiles (usec): 00:57:09.731 | 1.00th=[ 225], 5.00th=[ 249], 10.00th=[ 262], 20.00th=[ 273], 00:57:09.731 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 318], 00:57:09.731 | 70.00th=[ 351], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 416], 00:57:09.731 | 99.00th=[ 465], 99.50th=[ 603], 99.90th=[ 7373], 99.95th=[ 7570], 00:57:09.731 | 99.99th=[ 7570] 00:57:09.731 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:57:09.731 slat (nsec): min=11247, max=98026, avg=16366.99, stdev=7089.51 00:57:09.731 clat (usec): min=91, max=718, avg=191.93, stdev=28.85 00:57:09.731 lat (usec): min=104, max=740, avg=208.30, stdev=29.63 00:57:09.731 clat percentiles (usec): 00:57:09.731 | 1.00th=[ 125], 5.00th=[ 137], 10.00th=[ 155], 20.00th=[ 176], 00:57:09.731 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 200], 00:57:09.732 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 231], 00:57:09.732 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 289], 99.95th=[ 412], 00:57:09.732 | 99.99th=[ 717] 00:57:09.732 bw ( KiB/s): min= 8208, max= 8208, per=23.53%, avg=8208.00, stdev= 0.00, samples=1 00:57:09.732 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:57:09.732 lat (usec) : 100=0.05%, 250=57.52%, 500=42.16%, 750=0.08% 00:57:09.732 lat (msec) : 4=0.08%, 10=0.11% 00:57:09.732 cpu : usr=0.60%, sys=4.10%, ctx=3685, majf=0, minf=9 00:57:09.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:09.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:09.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:09.732 issued rwts: total=1636,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:09.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:09.732 00:57:09.732 Run status group 0 (all jobs): 00:57:09.732 READ: bw=30.3MiB/s (31.7MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.3MiB (31.8MB), run=1001-1001msec 00:57:09.732 WRITE: bw=34.1MiB/s (35.7MB/s), 7421KiB/s-10.8MiB/s (7599kB/s-11.4MB/s), io=34.1MiB (35.8MB), run=1001-1001msec 00:57:09.732 00:57:09.732 Disk stats (read/write): 00:57:09.732 nvme0n1: ios=2098/2435, merge=0/0, ticks=537/382, in_queue=919, util=92.78% 00:57:09.732 nvme0n2: ios=1422/1536, merge=0/0, ticks=540/335, in_queue=875, util=90.10% 00:57:09.732 nvme0n3: ios=1733/2048, merge=0/0, ticks=485/365, in_queue=850, util=90.16% 00:57:09.732 nvme0n4: ios=1536/1601, merge=0/0, ticks=505/316, in_queue=821, util=88.77% 00:57:09.732 10:52:17 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:57:09.732 [global] 00:57:09.732 thread=1 00:57:09.732 invalidate=1 00:57:09.732 rw=write 00:57:09.732 time_based=1 00:57:09.732 runtime=1 00:57:09.732 ioengine=libaio 00:57:09.732 direct=1 00:57:09.732 bs=4096 00:57:09.732 iodepth=128 00:57:09.732 norandommap=0 00:57:09.732 numjobs=1 00:57:09.732 00:57:09.732 verify_dump=1 00:57:09.732 verify_backlog=512 00:57:09.732 verify_state_save=0 00:57:09.732 do_verify=1 00:57:09.732 verify=crc32c-intel 00:57:09.732 [job0] 00:57:09.732 filename=/dev/nvme0n1 00:57:09.732 [job1] 00:57:09.732 filename=/dev/nvme0n2 00:57:09.732 [job2] 00:57:09.732 filename=/dev/nvme0n3 00:57:09.732 [job3] 00:57:09.732 filename=/dev/nvme0n4 00:57:09.732 Could not set queue depth (nvme0n1) 00:57:09.732 Could not set queue depth (nvme0n2) 00:57:09.732 Could not set queue depth (nvme0n3) 00:57:09.732 Could not set queue depth (nvme0n4) 00:57:09.732 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:09.732 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:09.732 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:09.732 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:09.732 fio-3.35 00:57:09.732 Starting 4 threads 00:57:11.108 00:57:11.108 job0: (groupid=0, jobs=1): err= 0: pid=93747: Mon Jul 22 10:52:18 2024 00:57:11.108 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:57:11.108 slat (usec): min=7, max=13976, avg=233.31, stdev=1207.86 00:57:11.108 clat (usec): min=13584, max=57133, avg=29571.16, stdev=10906.62 00:57:11.108 lat (usec): min=13624, max=57175, avg=29804.47, stdev=11028.72 00:57:11.108 clat percentiles (usec): 00:57:11.108 | 1.00th=[14615], 5.00th=[16909], 10.00th=[17171], 20.00th=[18482], 00:57:11.108 | 30.00th=[19792], 40.00th=[21365], 50.00th=[26608], 60.00th=[36963], 00:57:11.108 | 70.00th=[39060], 80.00th=[41157], 90.00th=[43254], 95.00th=[44303], 00:57:11.108 | 99.00th=[45876], 99.50th=[49021], 99.90th=[53740], 99.95th=[54264], 00:57:11.108 | 99.99th=[56886] 00:57:11.108 write: IOPS=2420, BW=9682KiB/s (9914kB/s)(9740KiB/1006msec); 0 zone resets 00:57:11.108 slat (usec): min=10, max=11150, avg=204.58, stdev=897.44 00:57:11.108 clat (usec): min=2006, max=52779, avg=27189.77, stdev=9116.83 00:57:11.108 lat (usec): min=5923, max=52823, avg=27394.35, stdev=9181.42 00:57:11.108 clat percentiles (usec): 00:57:11.108 | 1.00th=[11731], 5.00th=[15008], 10.00th=[16057], 20.00th=[17695], 00:57:11.108 | 30.00th=[19792], 40.00th=[21890], 50.00th=[27657], 60.00th=[31065], 00:57:11.108 | 70.00th=[33162], 80.00th=[36439], 90.00th=[39060], 95.00th=[41157], 00:57:11.108 | 99.00th=[45876], 99.50th=[46400], 99.90th=[52691], 99.95th=[52691], 00:57:11.108 | 99.99th=[52691] 00:57:11.108 bw ( KiB/s): min= 6155, max=12312, per=19.26%, avg=9233.50, stdev=4353.66, samples=2 00:57:11.108 iops : min= 1538, max= 3078, avg=2308.00, stdev=1088.94, samples=2 00:57:11.108 lat (msec) : 4=0.02%, 10=0.18%, 20=31.43%, 50=68.10%, 100=0.27% 00:57:11.108 cpu : usr=2.29%, sys=10.15%, ctx=605, majf=0, minf=11 00:57:11.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:57:11.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:11.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:57:11.108 issued rwts: total=2048,2435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:11.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:57:11.108 job1: (groupid=0, jobs=1): err= 0: pid=93748: Mon Jul 22 10:52:18 2024 00:57:11.108 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:57:11.108 slat (usec): min=7, max=9127, avg=144.97, stdev=576.05 00:57:11.108 clat (usec): min=14115, max=38856, avg=18835.26, stdev=2852.74 00:57:11.108 lat (usec): min=14868, max=38877, avg=18980.23, stdev=2864.82 00:57:11.108 clat percentiles (usec): 00:57:11.108 | 1.00th=[15008], 5.00th=[15664], 10.00th=[16188], 20.00th=[17171], 00:57:11.108 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18482], 60.00th=[18744], 00:57:11.108 | 70.00th=[19268], 80.00th=[20055], 90.00th=[21103], 95.00th=[21890], 00:57:11.108 | 99.00th=[36963], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:57:11.108 | 99.99th=[39060] 00:57:11.108 write: IOPS=3468, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1002msec); 0 zone resets 00:57:11.108 slat (usec): min=20, max=16523, avg=149.19, stdev=727.80 00:57:11.108 clat (usec): min=460, max=48335, avg=19701.95, stdev=7659.77 00:57:11.108 lat (usec): min=3272, max=48378, avg=19851.14, stdev=7690.12 00:57:11.108 clat percentiles (usec): 00:57:11.108 | 1.00th=[ 4621], 5.00th=[14091], 10.00th=[14877], 20.00th=[15401], 00:57:11.108 | 30.00th=[16188], 40.00th=[16909], 50.00th=[17957], 60.00th=[18220], 00:57:11.108 | 70.00th=[18744], 80.00th=[20055], 90.00th=[33424], 95.00th=[38536], 00:57:11.108 | 99.00th=[47449], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:57:11.108 | 99.99th=[48497] 00:57:11.108 bw ( KiB/s): min=12288, max=12288, per=25.64%, avg=12288.00, stdev= 0.00, samples=1 00:57:11.108 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:57:11.108 lat (usec) : 500=0.02% 00:57:11.108 lat (msec) : 4=0.27%, 10=0.79%, 20=79.20%, 50=19.72% 00:57:11.108 cpu : usr=3.50%, sys=13.69%, ctx=562, majf=0, minf=15 00:57:11.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:57:11.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:11.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:57:11.108 issued rwts: total=3072,3475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:11.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:57:11.108 job2: (groupid=0, jobs=1): err= 0: pid=93749: Mon Jul 22 10:52:18 2024 00:57:11.108 read: IOPS=3445, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1003msec) 00:57:11.108 slat (usec): min=3, max=6262, avg=131.83, stdev=566.53 00:57:11.108 clat (usec): min=2042, max=32695, avg=16760.76, stdev=2792.27 00:57:11.108 lat (usec): min=2062, max=32716, avg=16892.59, stdev=2774.63 00:57:11.109 clat percentiles (usec): 00:57:11.109 | 1.00th=[ 6390], 5.00th=[13435], 10.00th=[14353], 20.00th=[15795], 00:57:11.109 | 30.00th=[16057], 40.00th=[16450], 50.00th=[16712], 60.00th=[16909], 00:57:11.109 | 70.00th=[17433], 80.00th=[17695], 90.00th=[19268], 95.00th=[20579], 00:57:11.109 | 99.00th=[23725], 99.50th=[31589], 99.90th=[31589], 99.95th=[31589], 00:57:11.109 | 99.99th=[32637] 00:57:11.109 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:57:11.109 slat (usec): min=9, max=16644, avg=140.56, stdev=636.02 00:57:11.109 clat (usec): min=11386, max=50837, avg=18963.65, stdev=7915.21 00:57:11.109 lat (usec): min=11450, max=50873, avg=19104.21, stdev=7952.63 00:57:11.109 clat percentiles (usec): 00:57:11.109 | 
1.00th=[12649], 5.00th=[13173], 10.00th=[14091], 20.00th=[14877], 00:57:11.109 | 30.00th=[15270], 40.00th=[15533], 50.00th=[16057], 60.00th=[16909], 00:57:11.109 | 70.00th=[17957], 80.00th=[18744], 90.00th=[33424], 95.00th=[38536], 00:57:11.109 | 99.00th=[48497], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:57:11.109 | 99.99th=[50594] 00:57:11.109 bw ( KiB/s): min=12288, max=16384, per=29.91%, avg=14336.00, stdev=2896.31, samples=2 00:57:11.109 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:57:11.109 lat (msec) : 4=0.36%, 10=0.45%, 20=88.34%, 50=10.50%, 100=0.36% 00:57:11.109 cpu : usr=4.19%, sys=14.37%, ctx=606, majf=0, minf=7 00:57:11.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:57:11.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:11.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:57:11.109 issued rwts: total=3456,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:11.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:57:11.109 job3: (groupid=0, jobs=1): err= 0: pid=93750: Mon Jul 22 10:52:18 2024 00:57:11.109 read: IOPS=2133, BW=8533KiB/s (8738kB/s)(8584KiB/1006msec) 00:57:11.109 slat (usec): min=7, max=13076, avg=231.86, stdev=1100.04 00:57:11.109 clat (usec): min=2123, max=55416, avg=28588.43, stdev=11265.35 00:57:11.109 lat (usec): min=13164, max=59649, avg=28820.29, stdev=11371.42 00:57:11.109 clat percentiles (usec): 00:57:11.109 | 1.00th=[13698], 5.00th=[15664], 10.00th=[16450], 20.00th=[17171], 00:57:11.109 | 30.00th=[17695], 40.00th=[20055], 50.00th=[26608], 60.00th=[35914], 00:57:11.109 | 70.00th=[39060], 80.00th=[39584], 90.00th=[42730], 95.00th=[45351], 00:57:11.109 | 99.00th=[51119], 99.50th=[52167], 99.90th=[54789], 99.95th=[54789], 00:57:11.109 | 99.99th=[55313] 00:57:11.109 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:57:11.109 slat (usec): min=9, max=12137, avg=186.47, stdev=851.22 00:57:11.109 clat (usec): min=12319, max=53573, avg=25621.85, stdev=10086.48 00:57:11.109 lat (usec): min=12353, max=59530, avg=25808.32, stdev=10168.50 00:57:11.109 clat percentiles (usec): 00:57:11.109 | 1.00th=[12649], 5.00th=[13304], 10.00th=[13829], 20.00th=[15795], 00:57:11.109 | 30.00th=[16909], 40.00th=[18220], 50.00th=[22938], 60.00th=[29754], 00:57:11.109 | 70.00th=[32900], 80.00th=[35390], 90.00th=[39060], 95.00th=[41681], 00:57:11.109 | 99.00th=[47973], 99.50th=[49546], 99.90th=[53740], 99.95th=[53740], 00:57:11.109 | 99.99th=[53740] 00:57:11.109 bw ( KiB/s): min= 7936, max=12312, per=21.12%, avg=10124.00, stdev=3094.30, samples=2 00:57:11.109 iops : min= 1984, max= 3078, avg=2531.00, stdev=773.57, samples=2 00:57:11.109 lat (msec) : 4=0.02%, 20=42.65%, 50=56.57%, 100=0.76% 00:57:11.109 cpu : usr=3.58%, sys=9.65%, ctx=619, majf=0, minf=17 00:57:11.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:57:11.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:11.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:57:11.109 issued rwts: total=2146,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:11.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:57:11.109 00:57:11.109 Run status group 0 (all jobs): 00:57:11.109 READ: bw=41.6MiB/s (43.7MB/s), 8143KiB/s-13.5MiB/s (8339kB/s-14.1MB/s), io=41.9MiB (43.9MB), run=1002-1006msec 00:57:11.109 WRITE: bw=46.8MiB/s (49.1MB/s), 9682KiB/s-14.0MiB/s (9914kB/s-14.6MB/s), 
io=47.1MiB (49.4MB), run=1002-1006msec 00:57:11.109 00:57:11.109 Disk stats (read/write): 00:57:11.109 nvme0n1: ios=2008/2048, merge=0/0, ticks=19122/17301, in_queue=36423, util=89.67% 00:57:11.109 nvme0n2: ios=2609/3048, merge=0/0, ticks=11714/12721, in_queue=24435, util=89.61% 00:57:11.109 nvme0n3: ios=2941/3072, merge=0/0, ticks=12198/12388, in_queue=24586, util=88.84% 00:57:11.109 nvme0n4: ios=2048/2183, merge=0/0, ticks=18213/13586, in_queue=31799, util=89.40% 00:57:11.109 10:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:57:11.109 [global] 00:57:11.109 thread=1 00:57:11.109 invalidate=1 00:57:11.109 rw=randwrite 00:57:11.109 time_based=1 00:57:11.109 runtime=1 00:57:11.109 ioengine=libaio 00:57:11.109 direct=1 00:57:11.109 bs=4096 00:57:11.109 iodepth=128 00:57:11.109 norandommap=0 00:57:11.109 numjobs=1 00:57:11.109 00:57:11.109 verify_dump=1 00:57:11.109 verify_backlog=512 00:57:11.109 verify_state_save=0 00:57:11.109 do_verify=1 00:57:11.109 verify=crc32c-intel 00:57:11.109 [job0] 00:57:11.109 filename=/dev/nvme0n1 00:57:11.109 [job1] 00:57:11.109 filename=/dev/nvme0n2 00:57:11.109 [job2] 00:57:11.109 filename=/dev/nvme0n3 00:57:11.109 [job3] 00:57:11.109 filename=/dev/nvme0n4 00:57:11.109 Could not set queue depth (nvme0n1) 00:57:11.109 Could not set queue depth (nvme0n2) 00:57:11.109 Could not set queue depth (nvme0n3) 00:57:11.109 Could not set queue depth (nvme0n4) 00:57:11.109 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:11.109 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:11.109 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:11.109 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:11.109 fio-3.35 00:57:11.109 Starting 4 threads 00:57:12.485 00:57:12.485 job0: (groupid=0, jobs=1): err= 0: pid=93814: Mon Jul 22 10:52:20 2024 00:57:12.485 read: IOPS=5830, BW=22.8MiB/s (23.9MB/s)(22.8MiB/1002msec) 00:57:12.485 slat (usec): min=10, max=2913, avg=80.35, stdev=302.23 00:57:12.485 clat (usec): min=805, max=13580, avg=10829.60, stdev=1115.28 00:57:12.485 lat (usec): min=824, max=14926, avg=10909.95, stdev=1104.87 00:57:12.485 clat percentiles (usec): 00:57:12.485 | 1.00th=[ 6652], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:57:12.485 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:57:12.485 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12125], 00:57:12.485 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13435], 99.95th=[13435], 00:57:12.485 | 99.99th=[13566] 00:57:12.485 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:57:12.485 slat (usec): min=21, max=2316, avg=75.14, stdev=226.25 00:57:12.485 clat (usec): min=7952, max=12790, avg=10311.57, stdev=852.65 00:57:12.485 lat (usec): min=7983, max=12824, avg=10386.72, stdev=859.97 00:57:12.485 clat percentiles (usec): 00:57:12.485 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9110], 20.00th=[ 9634], 00:57:12.485 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:57:12.485 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11469], 95.00th=[11863], 00:57:12.485 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12780], 99.95th=[12780], 00:57:12.485 | 99.99th=[12780] 00:57:12.485 bw ( 
KiB/s): min=24576, max=24576, per=59.75%, avg=24576.00, stdev= 0.00, samples=1 00:57:12.485 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:57:12.485 lat (usec) : 1000=0.06% 00:57:12.485 lat (msec) : 4=0.08%, 10=27.02%, 20=72.84% 00:57:12.485 cpu : usr=5.59%, sys=25.87%, ctx=888, majf=0, minf=11 00:57:12.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:57:12.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:12.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:57:12.485 issued rwts: total=5842,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:12.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:57:12.485 job1: (groupid=0, jobs=1): err= 0: pid=93815: Mon Jul 22 10:52:20 2024 00:57:12.485 read: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec) 00:57:12.486 slat (usec): min=7, max=11402, avg=409.15, stdev=1461.27 00:57:12.486 clat (usec): min=33984, max=69869, avg=49853.69, stdev=5145.22 00:57:12.486 lat (usec): min=35093, max=69931, avg=50262.83, stdev=5072.28 00:57:12.486 clat percentiles (usec): 00:57:12.486 | 1.00th=[38536], 5.00th=[41157], 10.00th=[44303], 20.00th=[45876], 00:57:12.486 | 30.00th=[47449], 40.00th=[49021], 50.00th=[50070], 60.00th=[51119], 00:57:12.486 | 70.00th=[52167], 80.00th=[53216], 90.00th=[54789], 95.00th=[58459], 00:57:12.486 | 99.00th=[65274], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:57:12.486 | 99.99th=[69731] 00:57:12.486 write: IOPS=1343, BW=5376KiB/s (5505kB/s)(5392KiB/1003msec); 0 zone resets 00:57:12.486 slat (usec): min=9, max=13311, avg=420.20, stdev=1572.43 00:57:12.486 clat (msec): min=2, max=105, avg=54.66, stdev=22.54 00:57:12.486 lat (msec): min=2, max=105, avg=55.08, stdev=22.64 00:57:12.486 clat percentiles (msec): 00:57:12.486 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 39], 00:57:12.486 | 30.00th=[ 41], 40.00th=[ 45], 50.00th=[ 47], 60.00th=[ 56], 00:57:12.486 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 88], 95.00th=[ 96], 00:57:12.486 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 106], 99.95th=[ 106], 00:57:12.486 | 99.99th=[ 106] 00:57:12.486 bw ( KiB/s): min= 4769, max= 5008, per=11.88%, avg=4888.50, stdev=169.00, samples=2 00:57:12.486 iops : min= 1192, max= 1252, avg=1222.00, stdev=42.43, samples=2 00:57:12.486 lat (msec) : 4=1.26%, 20=1.85%, 50=48.99%, 100=46.50%, 250=1.39% 00:57:12.486 cpu : usr=2.10%, sys=4.49%, ctx=405, majf=0, minf=7 00:57:12.486 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.3% 00:57:12.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:12.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:57:12.486 issued rwts: total=1024,1348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:12.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:57:12.486 job2: (groupid=0, jobs=1): err= 0: pid=93816: Mon Jul 22 10:52:20 2024 00:57:12.486 read: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec) 00:57:12.486 slat (usec): min=6, max=15670, avg=409.84, stdev=1599.95 00:57:12.486 clat (usec): min=32478, max=67078, avg=50174.90, stdev=4866.51 00:57:12.486 lat (usec): min=38203, max=68040, avg=50584.75, stdev=4741.42 00:57:12.486 clat percentiles (usec): 00:57:12.486 | 1.00th=[38536], 5.00th=[42730], 10.00th=[45351], 20.00th=[46400], 00:57:12.486 | 30.00th=[47449], 40.00th=[49546], 50.00th=[50070], 60.00th=[51119], 00:57:12.486 | 70.00th=[51643], 80.00th=[53216], 90.00th=[55313], 95.00th=[59507], 
00:57:12.486 | 99.00th=[65274], 99.50th=[65274], 99.90th=[66847], 99.95th=[66847], 00:57:12.486 | 99.99th=[66847] 00:57:12.486 write: IOPS=1324, BW=5297KiB/s (5425kB/s)(5308KiB/1002msec); 0 zone resets 00:57:12.486 slat (usec): min=9, max=14393, avg=428.05, stdev=1531.71 00:57:12.486 clat (usec): min=800, max=105091, avg=55182.61, stdev=22577.49 00:57:12.486 lat (msec): min=2, max=105, avg=55.61, stdev=22.67 00:57:12.486 clat percentiles (msec): 00:57:12.486 | 1.00th=[ 3], 5.00th=[ 22], 10.00th=[ 33], 20.00th=[ 40], 00:57:12.486 | 30.00th=[ 41], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 57], 00:57:12.486 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 89], 95.00th=[ 95], 00:57:12.486 | 99.00th=[ 102], 99.50th=[ 104], 99.90th=[ 106], 99.95th=[ 106], 00:57:12.486 | 99.99th=[ 106] 00:57:12.486 bw ( KiB/s): min= 4720, max= 4870, per=11.66%, avg=4795.00, stdev=106.07, samples=2 00:57:12.486 iops : min= 1180, max= 1217, avg=1198.50, stdev=26.16, samples=2 00:57:12.486 lat (usec) : 1000=0.04% 00:57:12.486 lat (msec) : 4=1.36%, 20=1.36%, 50=46.70%, 100=48.92%, 250=1.62% 00:57:12.486 cpu : usr=1.40%, sys=5.49%, ctx=338, majf=0, minf=19 00:57:12.486 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:57:12.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:12.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:57:12.486 issued rwts: total=1024,1327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:12.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:57:12.486 job3: (groupid=0, jobs=1): err= 0: pid=93817: Mon Jul 22 10:52:20 2024 00:57:12.486 read: IOPS=1144, BW=4580KiB/s (4690kB/s)(4612KiB/1007msec) 00:57:12.486 slat (usec): min=9, max=18890, avg=411.42, stdev=2065.30 00:57:12.486 clat (usec): min=1493, max=116554, avg=45787.12, stdev=19354.80 00:57:12.486 lat (msec): min=14, max=116, avg=46.20, stdev=19.40 00:57:12.486 clat percentiles (msec): 00:57:12.486 | 1.00th=[ 15], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 35], 00:57:12.486 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 38], 00:57:12.486 | 70.00th=[ 47], 80.00th=[ 71], 90.00th=[ 73], 95.00th=[ 82], 00:57:12.486 | 99.00th=[ 105], 99.50th=[ 117], 99.90th=[ 117], 99.95th=[ 117], 00:57:12.486 | 99.99th=[ 117] 00:57:12.486 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:57:12.486 slat (usec): min=33, max=20571, avg=334.53, stdev=1809.82 00:57:12.486 clat (msec): min=19, max=102, avg=47.60, stdev=17.14 00:57:12.486 lat (msec): min=23, max=102, avg=47.93, stdev=17.13 00:57:12.486 clat percentiles (msec): 00:57:12.486 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 27], 00:57:12.486 | 30.00th=[ 32], 40.00th=[ 50], 50.00th=[ 52], 60.00th=[ 53], 00:57:12.486 | 70.00th=[ 58], 80.00th=[ 62], 90.00th=[ 67], 95.00th=[ 72], 00:57:12.486 | 99.00th=[ 103], 99.50th=[ 103], 99.90th=[ 103], 99.95th=[ 103], 00:57:12.486 | 99.99th=[ 103] 00:57:12.486 bw ( KiB/s): min= 5234, max= 7064, per=14.95%, avg=6149.00, stdev=1294.01, samples=2 00:57:12.486 iops : min= 1308, max= 1766, avg=1537.00, stdev=323.85, samples=2 00:57:12.486 lat (msec) : 2=0.04%, 20=0.78%, 50=52.88%, 100=44.40%, 250=1.90% 00:57:12.486 cpu : usr=1.59%, sys=5.86%, ctx=114, majf=0, minf=13 00:57:12.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:57:12.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:12.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:57:12.486 issued rwts: 
total=1153,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:12.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:57:12.486 00:57:12.486 Run status group 0 (all jobs): 00:57:12.486 READ: bw=35.1MiB/s (36.8MB/s), 4084KiB/s-22.8MiB/s (4182kB/s-23.9MB/s), io=35.3MiB (37.0MB), run=1002-1007msec 00:57:12.486 WRITE: bw=40.2MiB/s (42.1MB/s), 5297KiB/s-24.0MiB/s (5425kB/s-25.1MB/s), io=40.4MiB (42.4MB), run=1002-1007msec 00:57:12.486 00:57:12.486 Disk stats (read/write): 00:57:12.486 nvme0n1: ios=5170/5329, merge=0/0, ticks=13762/11507, in_queue=25269, util=89.87% 00:57:12.486 nvme0n2: ios=944/1024, merge=0/0, ticks=11207/14915, in_queue=26122, util=89.00% 00:57:12.486 nvme0n3: ios=924/1024, merge=0/0, ticks=11052/15208, in_queue=26260, util=90.28% 00:57:12.486 nvme0n4: ios=1049/1452, merge=0/0, ticks=11248/14712, in_queue=25960, util=90.33% 00:57:12.486 10:52:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:57:12.486 10:52:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=93831 00:57:12.486 10:52:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:57:12.486 10:52:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:57:12.486 [global] 00:57:12.486 thread=1 00:57:12.486 invalidate=1 00:57:12.486 rw=read 00:57:12.486 time_based=1 00:57:12.486 runtime=10 00:57:12.486 ioengine=libaio 00:57:12.486 direct=1 00:57:12.486 bs=4096 00:57:12.486 iodepth=1 00:57:12.486 norandommap=1 00:57:12.486 numjobs=1 00:57:12.486 00:57:12.486 [job0] 00:57:12.486 filename=/dev/nvme0n1 00:57:12.486 [job1] 00:57:12.486 filename=/dev/nvme0n2 00:57:12.486 [job2] 00:57:12.486 filename=/dev/nvme0n3 00:57:12.486 [job3] 00:57:12.486 filename=/dev/nvme0n4 00:57:12.486 Could not set queue depth (nvme0n1) 00:57:12.486 Could not set queue depth (nvme0n2) 00:57:12.486 Could not set queue depth (nvme0n3) 00:57:12.486 Could not set queue depth (nvme0n4) 00:57:12.744 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:12.744 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:12.744 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:12.744 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:12.744 fio-3.35 00:57:12.744 Starting 4 threads 00:57:16.020 10:52:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:57:16.020 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=29949952, buflen=4096 00:57:16.020 fio: pid=93874, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:57:16.020 10:52:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:57:16.020 fio: pid=93873, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:57:16.020 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=44879872, buflen=4096 00:57:16.020 10:52:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:57:16.020 10:52:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:57:16.020 fio: pid=93871, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 
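At this point target/fio.sh has reached its hotplug check: the read job started just above with fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 keeps running for ten seconds while the script deletes the RAID volumes and their member malloc bdevs out from under it, and the Remote I/O errors fio reports on each namespace are exactly the outcome the test wants. The lines below are a condensed sketch of that sequence, assembled only from RPCs and options visible in this trace (the rpc.py path, bdev names and the 3-second delay are the log's own values); treat it as an illustration of the step, not the test script itself.

  # sketch: long read job racing concurrent bdev deletion (all values taken from the trace)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3                                  # give the job time to start, as fio.sh does
  $RPC bdev_raid_delete concat0            # pull the concat volume away mid-I/O
  $RPC bdev_raid_delete raid0              # then the raid0 volume
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $RPC bdev_malloc_delete "$m"         # and every backing malloc bdev
  done
  wait "$fio_pid"                          # a non-zero exit here is the expected result

The script records that exit status as fio_status=4 and later prints "nvmf hotplug test: fio failed as expected", which is why the err=121 (Remote I/O error) entries in the job summaries below do not fail the build.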
00:57:16.020 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=35590144, buflen=4096 00:57:16.020 10:52:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:57:16.020 10:52:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:57:16.278 fio: pid=93872, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:57:16.278 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=55640064, buflen=4096 00:57:16.278 00:57:16.278 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93871: Mon Jul 22 10:52:24 2024 00:57:16.278 read: IOPS=2705, BW=10.6MiB/s (11.1MB/s)(33.9MiB/3212msec) 00:57:16.278 slat (usec): min=5, max=16202, avg=33.15, stdev=249.30 00:57:16.278 clat (usec): min=103, max=4379, avg=334.14, stdev=132.17 00:57:16.278 lat (usec): min=114, max=16528, avg=367.29, stdev=282.50 00:57:16.278 clat percentiles (usec): 00:57:16.278 | 1.00th=[ 125], 5.00th=[ 190], 10.00th=[ 235], 20.00th=[ 297], 00:57:16.278 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 355], 00:57:16.278 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 392], 95.00th=[ 404], 00:57:16.278 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 2573], 99.95th=[ 4047], 00:57:16.278 | 99.99th=[ 4359] 00:57:16.278 bw ( KiB/s): min=10216, max=11153, per=22.38%, avg=10584.50, stdev=335.63, samples=6 00:57:16.278 iops : min= 2554, max= 2788, avg=2646.00, stdev=83.77, samples=6 00:57:16.278 lat (usec) : 250=11.65%, 500=88.15%, 750=0.03% 00:57:16.278 lat (msec) : 2=0.06%, 4=0.05%, 10=0.06% 00:57:16.278 cpu : usr=1.34%, sys=6.57%, ctx=8705, majf=0, minf=1 00:57:16.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:16.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:16.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:16.278 issued rwts: total=8690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:16.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:16.278 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93872: Mon Jul 22 10:52:24 2024 00:57:16.278 read: IOPS=3961, BW=15.5MiB/s (16.2MB/s)(53.1MiB/3429msec) 00:57:16.278 slat (usec): min=6, max=11323, avg=14.78, stdev=198.51 00:57:16.278 clat (usec): min=2, max=2787, avg=236.88, stdev=81.02 00:57:16.278 lat (usec): min=108, max=11486, avg=251.66, stdev=213.43 00:57:16.278 clat percentiles (usec): 00:57:16.278 | 1.00th=[ 114], 5.00th=[ 123], 10.00th=[ 135], 20.00th=[ 161], 00:57:16.278 | 30.00th=[ 190], 40.00th=[ 215], 50.00th=[ 243], 60.00th=[ 269], 00:57:16.278 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 343], 00:57:16.278 | 99.00th=[ 396], 99.50th=[ 416], 99.90th=[ 519], 99.95th=[ 988], 00:57:16.278 | 99.99th=[ 1942] 00:57:16.278 bw ( KiB/s): min=13096, max=19888, per=31.86%, avg=15066.17, stdev=2581.92, samples=6 00:57:16.278 iops : min= 3274, max= 4972, avg=3766.50, stdev=645.49, samples=6 00:57:16.278 lat (usec) : 4=0.01%, 100=0.02%, 250=52.73%, 500=47.10%, 750=0.07% 00:57:16.278 lat (usec) : 1000=0.01% 00:57:16.278 lat (msec) : 2=0.04%, 4=0.01% 00:57:16.278 cpu : usr=0.70%, sys=3.21%, ctx=13609, majf=0, minf=1 00:57:16.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:16.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:57:16.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:16.278 issued rwts: total=13585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:16.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:16.278 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93873: Mon Jul 22 10:52:24 2024 00:57:16.278 read: IOPS=3616, BW=14.1MiB/s (14.8MB/s)(42.8MiB/3030msec) 00:57:16.278 slat (usec): min=5, max=7775, avg=11.48, stdev=88.17 00:57:16.278 clat (usec): min=44, max=7693, avg=264.26, stdev=156.40 00:57:16.278 lat (usec): min=134, max=8027, avg=275.75, stdev=179.40 00:57:16.278 clat percentiles (usec): 00:57:16.278 | 1.00th=[ 153], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 204], 00:57:16.278 | 30.00th=[ 227], 40.00th=[ 249], 50.00th=[ 269], 60.00th=[ 281], 00:57:16.278 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 347], 00:57:16.278 | 99.00th=[ 396], 99.50th=[ 416], 99.90th=[ 1713], 99.95th=[ 2311], 00:57:16.278 | 99.99th=[ 7373] 00:57:16.278 bw ( KiB/s): min=13104, max=18624, per=30.83%, avg=14579.20, stdev=2291.50, samples=5 00:57:16.278 iops : min= 3276, max= 4656, avg=3644.80, stdev=572.87, samples=5 00:57:16.278 lat (usec) : 50=0.01%, 250=40.43%, 500=59.35%, 750=0.09%, 1000=0.01% 00:57:16.278 lat (msec) : 2=0.05%, 4=0.02%, 10=0.04% 00:57:16.278 cpu : usr=0.56%, sys=3.10%, ctx=10968, majf=0, minf=1 00:57:16.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:16.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:16.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:16.278 issued rwts: total=10958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:16.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:16.278 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93874: Mon Jul 22 10:52:24 2024 00:57:16.278 read: IOPS=2582, BW=10.1MiB/s (10.6MB/s)(28.6MiB/2832msec) 00:57:16.278 slat (usec): min=19, max=110, avg=31.50, stdev= 4.66 00:57:16.278 clat (usec): min=164, max=5440, avg=352.26, stdev=88.26 00:57:16.278 lat (usec): min=192, max=5471, avg=383.75, stdev=88.53 00:57:16.278 clat percentiles (usec): 00:57:16.278 | 1.00th=[ 249], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 326], 00:57:16.278 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 359], 00:57:16.278 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 392], 95.00th=[ 404], 00:57:16.278 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 996], 99.95th=[ 1827], 00:57:16.278 | 99.99th=[ 5473] 00:57:16.278 bw ( KiB/s): min=10168, max=10618, per=22.00%, avg=10403.60, stdev=162.61, samples=5 00:57:16.278 iops : min= 2542, max= 2654, avg=2600.80, stdev=40.49, samples=5 00:57:16.278 lat (usec) : 250=1.05%, 500=98.69%, 750=0.11%, 1000=0.04% 00:57:16.278 lat (msec) : 2=0.05%, 4=0.03%, 10=0.01% 00:57:16.278 cpu : usr=1.59%, sys=6.96%, ctx=7335, majf=0, minf=2 00:57:16.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:16.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:16.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:16.278 issued rwts: total=7313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:16.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:16.278 00:57:16.279 Run status group 0 (all jobs): 00:57:16.279 READ: bw=46.2MiB/s (48.4MB/s), 10.1MiB/s-15.5MiB/s (10.6MB/s-16.2MB/s), io=158MiB 
(166MB), run=2832-3429msec 00:57:16.279 00:57:16.279 Disk stats (read/write): 00:57:16.279 nvme0n1: ios=8272/0, merge=0/0, ticks=2820/0, in_queue=2820, util=94.85% 00:57:16.279 nvme0n2: ios=13203/0, merge=0/0, ticks=3126/0, in_queue=3126, util=95.30% 00:57:16.279 nvme0n3: ios=10432/0, merge=0/0, ticks=2735/0, in_queue=2735, util=96.20% 00:57:16.279 nvme0n4: ios=6808/0, merge=0/0, ticks=2395/0, in_queue=2395, util=96.51% 00:57:16.279 10:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:57:16.279 10:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:57:16.536 10:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:57:16.536 10:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:57:16.793 10:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:57:16.793 10:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:57:17.050 10:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:57:17.050 10:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:57:17.050 10:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:57:17.050 10:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 93831 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:57:17.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:17.308 nvmf hotplug test: fio failed as expected 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:57:17.308 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:57:17.565 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:57:17.565 rmmod nvme_tcp 00:57:17.565 rmmod nvme_fabrics 00:57:17.565 rmmod nvme_keyring 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 93348 ']' 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 93348 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 93348 ']' 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 93348 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93348 00:57:17.824 killing process with pid 93348 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93348' 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 93348 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 93348 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:57:17.824 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:18.081 10:52:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:57:18.081 
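That flush of nvmf_init_if is the last step of nvmftestfini and closes out the nvmf_fio_target run: host disconnect, subsystem removal, verify-state cleanup, module unload and killing the target app all appear in the trace above. Condensed into one place, again using only names and the pid that appear in this log, the teardown order is roughly:

  # sketch of the teardown order seen above (NQN and pid are the values from this run)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                                   # host side detaches first
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
  modprobe -v -r nvme-tcp                  # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines follow this
  modprobe -v -r nvme-fabrics
  kill 93348                               # target app process for this run (pid from the log)
  ip -4 addr flush nvmf_init_if            # release the test network address

After that the harness prints the END TEST banner and timing summary and moves straight on to the next suite, nvmf_bdevio.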
************************************ 00:57:18.081 END TEST nvmf_fio_target 00:57:18.081 ************************************ 00:57:18.081 00:57:18.081 real 0m18.275s 00:57:18.081 user 1m10.055s 00:57:18.081 sys 0m7.554s 00:57:18.081 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:57:18.081 10:52:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:57:18.081 10:52:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:57:18.081 10:52:25 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:57:18.081 10:52:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:57:18.081 10:52:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:57:18.081 10:52:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:57:18.081 ************************************ 00:57:18.081 START TEST nvmf_bdevio 00:57:18.081 ************************************ 00:57:18.081 10:52:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:57:18.081 * Looking for test storage... 00:57:18.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:57:18.081 10:52:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:57:18.355 Cannot find device "nvmf_tgt_br" 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:57:18.355 Cannot find device "nvmf_tgt_br2" 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:57:18.355 Cannot find device "nvmf_tgt_br" 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 
00:57:18.355 Cannot find device "nvmf_tgt_br2" 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:18.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:18.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:18.355 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:57:18.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:18.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:57:18.614 00:57:18.614 --- 10.0.0.2 ping statistics --- 00:57:18.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:18.614 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:57:18.614 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:57:18.614 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:57:18.614 00:57:18.614 --- 10.0.0.3 ping statistics --- 00:57:18.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:18.614 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:57:18.614 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:18.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:18.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:57:18.615 00:57:18.615 --- 10.0.0.1 ping statistics --- 00:57:18.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:18.615 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:57:18.615 10:52:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:57:18.873 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:57:18.873 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=94192 00:57:18.873 10:52:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 94192 00:57:18.873 10:52:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 94192 ']' 00:57:18.873 10:52:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:18.873 10:52:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:57:18.873 10:52:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:18.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
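Before the nvmf_tgt app is launched inside the namespace (the `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x78` line above), nvmf_veth_init has built a small virtual topology: one veth pair for the initiator on the host, one for the target inside the nvmf_tgt_ns_spdk namespace (plus a second target interface for 10.0.0.3), all tied together by a bridge. The earlier "Cannot find device" / "Cannot open network namespace" messages are just the best-effort cleanup of any leftover topology from a previous run. A hand-written condensation of the setup, using the names and addresses from the trace (not the helper itself, and omitting the second target interface):

    # namespace plus two veth pairs: the *_if ends carry traffic, the *_br ends plug into the bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # 10.0.0.1 = initiator (host), 10.0.0.2 = target (namespace)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # bring the links up and bridge the *_br ends so host and namespace can talk
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # let NVMe/TCP (port 4420) in, allow forwarding across the bridge, then sanity-check
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2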
00:57:18.873 10:52:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:57:18.873 10:52:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:57:18.873 [2024-07-22 10:52:26.607149] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:57:18.873 [2024-07-22 10:52:26.607220] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:18.873 [2024-07-22 10:52:26.727513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:57:18.873 [2024-07-22 10:52:26.751161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:19.133 [2024-07-22 10:52:26.818136] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:19.133 [2024-07-22 10:52:26.818624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:19.133 [2024-07-22 10:52:26.818950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:19.133 [2024-07-22 10:52:26.819164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:19.133 [2024-07-22 10:52:26.819430] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:19.133 [2024-07-22 10:52:26.819787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:57:19.133 [2024-07-22 10:52:26.819972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:57:19.133 [2024-07-22 10:52:26.820145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:57:19.133 [2024-07-22 10:52:26.820146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:57:19.702 [2024-07-22 10:52:27.506339] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:57:19.702 Malloc0 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:19.702 10:52:27 
nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:57:19.702 [2024-07-22 10:52:27.597117] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:57:19.702 { 00:57:19.702 "params": { 00:57:19.702 "name": "Nvme$subsystem", 00:57:19.702 "trtype": "$TEST_TRANSPORT", 00:57:19.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:19.702 "adrfam": "ipv4", 00:57:19.702 "trsvcid": "$NVMF_PORT", 00:57:19.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:19.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:19.702 "hdgst": ${hdgst:-false}, 00:57:19.702 "ddgst": ${ddgst:-false} 00:57:19.702 }, 00:57:19.702 "method": "bdev_nvme_attach_controller" 00:57:19.702 } 00:57:19.702 EOF 00:57:19.702 )") 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:57:19.702 10:52:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:57:19.702 "params": { 00:57:19.702 "name": "Nvme1", 00:57:19.702 "trtype": "tcp", 00:57:19.702 "traddr": "10.0.0.2", 00:57:19.702 "adrfam": "ipv4", 00:57:19.702 "trsvcid": "4420", 00:57:19.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:57:19.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:57:19.702 "hdgst": false, 00:57:19.702 "ddgst": false 00:57:19.702 }, 00:57:19.702 "method": "bdev_nvme_attach_controller" 00:57:19.702 }' 00:57:19.961 [2024-07-22 10:52:27.653148] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
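Underneath the xtrace noise, the target side of this bdevio run is four rpc_cmd calls against the nvmf_tgt started above, and the initiator side is just the bdevio app fed the generated JSON printed in the trace. A bare-bones equivalent using rpc.py directly (NQNs, sizes and addresses exactly as they appear in the log; the -o/-u transport options come from the test's NVMF_TRANSPORT_OPTS):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport with the test's options
    $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose the bdev as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself is then invoked as `bdevio --json /dev/fd/62`, fd 62 being the process substitution that carries the bdev_nvme_attach_controller config shown above (gen_nvmf_target_json), so the only bdev under test is the one backed by the listener on 10.0.0.2:4420.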
00:57:19.961 [2024-07-22 10:52:27.653210] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94246 ] 00:57:19.961 [2024-07-22 10:52:27.771670] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:57:19.961 [2024-07-22 10:52:27.797386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:57:19.961 [2024-07-22 10:52:27.838991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:57:19.961 [2024-07-22 10:52:27.839152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:57:19.961 [2024-07-22 10:52:27.839153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:57:20.219 I/O targets: 00:57:20.219 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:57:20.219 00:57:20.219 00:57:20.219 CUnit - A unit testing framework for C - Version 2.1-3 00:57:20.219 http://cunit.sourceforge.net/ 00:57:20.219 00:57:20.219 00:57:20.219 Suite: bdevio tests on: Nvme1n1 00:57:20.219 Test: blockdev write read block ...passed 00:57:20.219 Test: blockdev write zeroes read block ...passed 00:57:20.219 Test: blockdev write zeroes read no split ...passed 00:57:20.219 Test: blockdev write zeroes read split ...passed 00:57:20.219 Test: blockdev write zeroes read split partial ...passed 00:57:20.219 Test: blockdev reset ...[2024-07-22 10:52:28.112712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:57:20.219 [2024-07-22 10:52:28.112921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2037a50 (9): Bad file descriptor 00:57:20.219 [2024-07-22 10:52:28.124565] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:57:20.219 passed 00:57:20.219 Test: blockdev write read 8 blocks ...passed 00:57:20.219 Test: blockdev write read size > 128k ...passed 00:57:20.219 Test: blockdev write read invalid size ...passed 00:57:20.477 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:57:20.477 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:57:20.477 Test: blockdev write read max offset ...passed 00:57:20.477 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:57:20.477 Test: blockdev writev readv 8 blocks ...passed 00:57:20.477 Test: blockdev writev readv 30 x 1block ...passed 00:57:20.477 Test: blockdev writev readv block ...passed 00:57:20.477 Test: blockdev writev readv size > 128k ...passed 00:57:20.477 Test: blockdev writev readv size > 128k in two iovs ...passed 00:57:20.477 Test: blockdev comparev and writev ...[2024-07-22 10:52:28.299325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:57:20.477 [2024-07-22 10:52:28.299470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:57:20.477 [2024-07-22 10:52:28.299494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:57:20.477 [2024-07-22 10:52:28.299503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:20.477 [2024-07-22 10:52:28.299836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:57:20.477 [2024-07-22 10:52:28.299847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:57:20.477 [2024-07-22 10:52:28.299860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:57:20.477 [2024-07-22 10:52:28.299870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:57:20.477 [2024-07-22 10:52:28.300147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:57:20.477 [2024-07-22 10:52:28.300158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:57:20.477 [2024-07-22 10:52:28.300172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:57:20.477 [2024-07-22 10:52:28.300181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:57:20.477 [2024-07-22 10:52:28.300459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:57:20.477 [2024-07-22 10:52:28.300471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:57:20.477 [2024-07-22 10:52:28.300484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:57:20.477 [2024-07-22 10:52:28.300494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:57:20.477 passed 00:57:20.477 Test: blockdev nvme passthru rw ...passed 00:57:20.477 Test: blockdev nvme passthru vendor specific ...[2024-07-22 10:52:28.384823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:57:20.477 [2024-07-22 10:52:28.384846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:57:20.477 [2024-07-22 10:52:28.384945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:57:20.477 [2024-07-22 10:52:28.384956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:57:20.477 [2024-07-22 10:52:28.385049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:57:20.477 [2024-07-22 10:52:28.385059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:57:20.477 passed 00:57:20.477 Test: blockdev nvme admin passthru ...[2024-07-22 10:52:28.385154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:57:20.477 [2024-07-22 10:52:28.385164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:57:20.477 passed 00:57:20.749 Test: blockdev copy ...passed 00:57:20.749 00:57:20.749 Run Summary: Type Total Ran Passed Failed Inactive 00:57:20.749 suites 1 1 n/a 0 0 00:57:20.749 tests 23 23 23 0 0 00:57:20.749 asserts 152 152 152 0 n/a 00:57:20.749 00:57:20.749 Elapsed time = 0.905 seconds 00:57:20.749 10:52:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:20.749 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:20.749 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:57:20.749 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:20.749 10:52:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:57:20.749 10:52:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:57:20.749 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:57:20.749 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:57:21.026 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:57:21.026 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:57:21.026 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:57:21.026 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:57:21.027 rmmod nvme_tcp 00:57:21.027 rmmod nvme_fabrics 00:57:21.027 rmmod nvme_keyring 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 94192 ']' 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 94192 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
94192 ']' 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 94192 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94192 00:57:21.027 killing process with pid 94192 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94192' 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 94192 00:57:21.027 10:52:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 94192 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:57:21.285 00:57:21.285 real 0m3.285s 00:57:21.285 user 0m10.698s 00:57:21.285 sys 0m0.994s 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:57:21.285 10:52:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:57:21.285 ************************************ 00:57:21.285 END TEST nvmf_bdevio 00:57:21.285 ************************************ 00:57:21.543 10:52:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:57:21.543 10:52:29 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:57:21.543 10:52:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:57:21.543 10:52:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:57:21.543 10:52:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:57:21.543 ************************************ 00:57:21.543 START TEST nvmf_auth_target 00:57:21.543 ************************************ 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:57:21.543 * Looking for test storage... 
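Every test in this run finishes with the same nvmftestfini/killprocess teardown that closes out nvmf_bdevio above, and the next test (nvmf_auth_target, which starts here) rebuilds the whole environment with nvmftestinit again. Condensed, the teardown amounts to the following; this is a loose paraphrase of the helpers, and the `ip netns delete` line in particular is an assumed stand-in for what _remove_spdk_ns does:

    sync
    modprobe -v -r nvme-tcp                    # the helper retries up to 20 times; also unloads nvme-fabrics
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"         # killprocess: stop the nvmf_tgt reactor (pid 94192 here)
    ip netns delete nvmf_tgt_ns_spdk           # assumption: _remove_spdk_ns tears the namespace down
    ip -4 addr flush nvmf_init_if              # drop the initiator-side address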
00:57:21.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:57:21.543 10:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:57:21.544 Cannot find device "nvmf_tgt_br" 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:57:21.544 Cannot find device "nvmf_tgt_br2" 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:57:21.544 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:57:21.802 Cannot find device "nvmf_tgt_br" 00:57:21.802 
10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:57:21.802 Cannot find device "nvmf_tgt_br2" 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:21.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:21.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:21.802 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:57:22.060 10:52:29 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:57:22.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:22.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:57:22.060 00:57:22.060 --- 10.0.0.2 ping statistics --- 00:57:22.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:22.060 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:57:22.060 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:57:22.060 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:57:22.060 00:57:22.060 --- 10.0.0.3 ping statistics --- 00:57:22.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:22.060 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:22.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:22.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:57:22.060 00:57:22.060 --- 10.0.0.1 ping statistics --- 00:57:22.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:22.060 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=94425 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 94425 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 94425 ']' 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:22.060 10:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=94470 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=107d15f10d4a79e7e75ff2e83ff2ede936dbecffe1012db4 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gcv 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 107d15f10d4a79e7e75ff2e83ff2ede936dbecffe1012db4 0 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 107d15f10d4a79e7e75ff2e83ff2ede936dbecffe1012db4 0 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=107d15f10d4a79e7e75ff2e83ff2ede936dbecffe1012db4 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gcv 00:57:22.994 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gcv 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.gcv 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0cda0317010718304aec97c8b9968ff9ad3b0d3959930cb7126cd2797bb80e70 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.iVy 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0cda0317010718304aec97c8b9968ff9ad3b0d3959930cb7126cd2797bb80e70 3 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0cda0317010718304aec97c8b9968ff9ad3b0d3959930cb7126cd2797bb80e70 3 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0cda0317010718304aec97c8b9968ff9ad3b0d3959930cb7126cd2797bb80e70 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.iVy 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.iVy 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.iVy 00:57:23.253 10:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=89def3fdcf2ebcc82984091ef73f8674 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NOG 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 89def3fdcf2ebcc82984091ef73f8674 1 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 89def3fdcf2ebcc82984091ef73f8674 1 
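Editor's note: the gen_dhchap_key <digest> <len> recipe traced above (and repeated for each key below) condenses to the following sketch. It only restates commands visible in this trace; the DHHC-1 encoding itself is performed by an inline python helper in nvmf/common.sh and is summarized in a comment rather than reimplemented, and the two-digit digest id (00-03) is inferred from the secrets used later in this run.

  # Sketch of gen_dhchap_key, assuming the helper layout seen in this trace.
  digest=null                                      # one of: null sha256 sha384 sha512 (ids 0-3)
  len=48                                           # key length in hex characters
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes as a hex string
  file=$(mktemp -t "spdk.key-${digest}.XXX")
  # format_dhchap_key (the inline "python -" step in the trace) wraps $key into a
  # "DHHC-1:0<digest id>:<base64 payload>:" secret and writes it to $file.
  chmod 0600 "$file"
  echo "$file"                                     # caller stores the path in keys[i] / ckeys[i]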
00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=89def3fdcf2ebcc82984091ef73f8674 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NOG 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NOG 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.NOG 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:23.253 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b236e823bf02df46be664c0707a9db4ec878b6d5f8544ba6 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Yc8 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b236e823bf02df46be664c0707a9db4ec878b6d5f8544ba6 2 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b236e823bf02df46be664c0707a9db4ec878b6d5f8544ba6 2 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b236e823bf02df46be664c0707a9db4ec878b6d5f8544ba6 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Yc8 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Yc8 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Yc8 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:57:23.254 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:57:23.254 
10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:57:23.513 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a1487d05b76e382523410908f7e46959f334c9a2b50fbdf1 00:57:23.513 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:57:23.513 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZNv 00:57:23.513 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a1487d05b76e382523410908f7e46959f334c9a2b50fbdf1 2 00:57:23.513 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a1487d05b76e382523410908f7e46959f334c9a2b50fbdf1 2 00:57:23.513 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:57:23.513 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:23.513 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a1487d05b76e382523410908f7e46959f334c9a2b50fbdf1 00:57:23.513 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZNv 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZNv 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ZNv 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=25a540d69ebb2fa561df1a15af8dd448 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.31O 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 25a540d69ebb2fa561df1a15af8dd448 1 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 25a540d69ebb2fa561df1a15af8dd448 1 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=25a540d69ebb2fa561df1a15af8dd448 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.31O 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.31O 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.31O 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a8babddd87841710e488b32c3716500c4c95d9ac86b6d02dd736e6f598c38533 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.N4R 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a8babddd87841710e488b32c3716500c4c95d9ac86b6d02dd736e6f598c38533 3 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a8babddd87841710e488b32c3716500c4c95d9ac86b6d02dd736e6f598c38533 3 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a8babddd87841710e488b32c3716500c4c95d9ac86b6d02dd736e6f598c38533 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.N4R 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.N4R 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.N4R 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 94425 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 94425 ']' 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:57:23.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
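Editor's note: for orientation at this point in the log, two SPDK processes have been started and are being waited on before any keys are registered. A minimal sketch, using only the paths and flags traced earlier in this run; the waitforlisten helper is summarized by its observable effect of blocking until the RPC socket answers.

  # Target: nvmf_tgt inside the nvmf_tgt_ns_spdk namespace, nvmf_auth tracing enabled.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!                                       # 94425 in this run
  # Host side: a second SPDK app exposes the initiator bdev_nvme RPCs on its own socket.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
  hostpid=$!                                       # 94470 in this run
  # waitforlisten <pid> [socket] blocks until the process accepts RPCs on its UNIX socket
  # (/var/tmp/spdk.sock for the target, /var/tmp/host.sock for the host app).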
00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:57:23.514 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:23.772 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:57:23.772 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:57:23.772 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 94470 /var/tmp/host.sock 00:57:23.772 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 94470 ']' 00:57:23.772 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:57:23.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:57:23.772 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:57:23.772 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:57:23.772 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:57:23.772 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gcv 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gcv 00:57:24.030 10:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gcv 00:57:24.288 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.iVy ]] 00:57:24.288 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iVy 00:57:24.288 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:24.288 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:24.288 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:24.288 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iVy 00:57:24.288 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.iVy 00:57:24.546 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:57:24.546 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NOG 00:57:24.546 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:24.546 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:24.546 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:24.546 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.NOG 00:57:24.547 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.NOG 00:57:24.547 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Yc8 ]] 00:57:24.547 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yc8 00:57:24.547 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:24.547 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:24.547 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:24.547 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yc8 00:57:24.547 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yc8 00:57:24.804 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:57:24.805 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZNv 00:57:24.805 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:24.805 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:24.805 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:24.805 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZNv 00:57:24.805 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZNv 00:57:25.064 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.31O ]] 00:57:25.064 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.31O 00:57:25.064 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:25.064 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:25.064 10:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:25.064 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.31O 00:57:25.064 10:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.31O 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:57:25.322 
10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.N4R 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.N4R 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.N4R 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:57:25.322 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:25.580 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:25.838 00:57:25.838 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:25.838 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:57:25.838 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:26.096 { 00:57:26.096 "auth": { 00:57:26.096 "dhgroup": "null", 00:57:26.096 "digest": "sha256", 00:57:26.096 "state": "completed" 00:57:26.096 }, 00:57:26.096 "cntlid": 1, 00:57:26.096 "listen_address": { 00:57:26.096 "adrfam": "IPv4", 00:57:26.096 "traddr": "10.0.0.2", 00:57:26.096 "trsvcid": "4420", 00:57:26.096 "trtype": "TCP" 00:57:26.096 }, 00:57:26.096 "peer_address": { 00:57:26.096 "adrfam": "IPv4", 00:57:26.096 "traddr": "10.0.0.1", 00:57:26.096 "trsvcid": "38676", 00:57:26.096 "trtype": "TCP" 00:57:26.096 }, 00:57:26.096 "qid": 0, 00:57:26.096 "state": "enabled", 00:57:26.096 "thread": "nvmf_tgt_poll_group_000" 00:57:26.096 } 00:57:26.096 ]' 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:57:26.096 10:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:26.096 10:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:26.096 10:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:26.096 10:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:26.355 10:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:57:29.638 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:29.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:29.638 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:29.638 10:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:29.638 10:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:29.638 10:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:29.638 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:57:29.638 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:57:29.638 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:29.897 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:30.155 00:57:30.155 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:30.155 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:30.155 10:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:30.414 { 00:57:30.414 "auth": { 00:57:30.414 "dhgroup": "null", 00:57:30.414 "digest": "sha256", 00:57:30.414 "state": "completed" 00:57:30.414 }, 00:57:30.414 "cntlid": 3, 00:57:30.414 "listen_address": { 00:57:30.414 "adrfam": "IPv4", 00:57:30.414 "traddr": "10.0.0.2", 00:57:30.414 "trsvcid": "4420", 00:57:30.414 "trtype": "TCP" 00:57:30.414 }, 00:57:30.414 "peer_address": { 
00:57:30.414 "adrfam": "IPv4", 00:57:30.414 "traddr": "10.0.0.1", 00:57:30.414 "trsvcid": "38696", 00:57:30.414 "trtype": "TCP" 00:57:30.414 }, 00:57:30.414 "qid": 0, 00:57:30.414 "state": "enabled", 00:57:30.414 "thread": "nvmf_tgt_poll_group_000" 00:57:30.414 } 00:57:30.414 ]' 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:30.414 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:30.672 10:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:57:31.238 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:31.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:31.238 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:31.238 10:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:31.238 10:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:31.238 10:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:31.238 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:31.239 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:57:31.239 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:31.497 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:31.755 00:57:31.756 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:31.756 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:31.756 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:32.014 { 00:57:32.014 "auth": { 00:57:32.014 "dhgroup": "null", 00:57:32.014 "digest": "sha256", 00:57:32.014 "state": "completed" 00:57:32.014 }, 00:57:32.014 "cntlid": 5, 00:57:32.014 "listen_address": { 00:57:32.014 "adrfam": "IPv4", 00:57:32.014 "traddr": "10.0.0.2", 00:57:32.014 "trsvcid": "4420", 00:57:32.014 "trtype": "TCP" 00:57:32.014 }, 00:57:32.014 "peer_address": { 00:57:32.014 "adrfam": "IPv4", 00:57:32.014 "traddr": "10.0.0.1", 00:57:32.014 "trsvcid": "38728", 00:57:32.014 "trtype": "TCP" 00:57:32.014 }, 00:57:32.014 "qid": 0, 00:57:32.014 "state": "enabled", 00:57:32.014 "thread": "nvmf_tgt_poll_group_000" 00:57:32.014 } 00:57:32.014 ]' 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:32.014 10:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:32.273 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:57:32.840 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:32.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:32.840 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:32.840 10:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:32.840 10:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:32.840 10:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:32.840 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:32.840 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:57:32.840 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:57:33.099 10:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:57:33.358 00:57:33.358 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:33.358 10:52:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:33.358 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:33.616 { 00:57:33.616 "auth": { 00:57:33.616 "dhgroup": "null", 00:57:33.616 "digest": "sha256", 00:57:33.616 "state": "completed" 00:57:33.616 }, 00:57:33.616 "cntlid": 7, 00:57:33.616 "listen_address": { 00:57:33.616 "adrfam": "IPv4", 00:57:33.616 "traddr": "10.0.0.2", 00:57:33.616 "trsvcid": "4420", 00:57:33.616 "trtype": "TCP" 00:57:33.616 }, 00:57:33.616 "peer_address": { 00:57:33.616 "adrfam": "IPv4", 00:57:33.616 "traddr": "10.0.0.1", 00:57:33.616 "trsvcid": "38764", 00:57:33.616 "trtype": "TCP" 00:57:33.616 }, 00:57:33.616 "qid": 0, 00:57:33.616 "state": "enabled", 00:57:33.616 "thread": "nvmf_tgt_poll_group_000" 00:57:33.616 } 00:57:33.616 ]' 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:33.616 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:33.875 10:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:57:34.442 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:34.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:34.442 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:34.442 10:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:34.442 10:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:34.442 10:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:34.442 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:57:34.442 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:34.442 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:34.442 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:34.702 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:34.960 00:57:34.960 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:34.960 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:34.960 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:34.960 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:34.960 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:34.960 10:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:34.960 10:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:34.960 10:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:34.960 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:34.960 { 00:57:34.960 "auth": { 00:57:34.960 "dhgroup": "ffdhe2048", 00:57:34.960 "digest": "sha256", 00:57:34.960 "state": "completed" 00:57:34.960 }, 00:57:34.960 "cntlid": 9, 00:57:34.960 "listen_address": { 00:57:34.960 "adrfam": "IPv4", 
00:57:34.960 "traddr": "10.0.0.2", 00:57:34.960 "trsvcid": "4420", 00:57:34.960 "trtype": "TCP" 00:57:34.960 }, 00:57:34.960 "peer_address": { 00:57:34.960 "adrfam": "IPv4", 00:57:34.960 "traddr": "10.0.0.1", 00:57:34.960 "trsvcid": "49618", 00:57:34.960 "trtype": "TCP" 00:57:34.960 }, 00:57:34.960 "qid": 0, 00:57:34.960 "state": "enabled", 00:57:34.960 "thread": "nvmf_tgt_poll_group_000" 00:57:34.960 } 00:57:34.960 ]' 00:57:34.960 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:35.218 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:35.218 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:35.218 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:57:35.218 10:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:35.218 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:35.218 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:35.218 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:35.475 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:36.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:36.042 10:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:36.301 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:36.559 { 00:57:36.559 "auth": { 00:57:36.559 "dhgroup": "ffdhe2048", 00:57:36.559 "digest": "sha256", 00:57:36.559 "state": "completed" 00:57:36.559 }, 00:57:36.559 "cntlid": 11, 00:57:36.559 "listen_address": { 00:57:36.559 "adrfam": "IPv4", 00:57:36.559 "traddr": "10.0.0.2", 00:57:36.559 "trsvcid": "4420", 00:57:36.559 "trtype": "TCP" 00:57:36.559 }, 00:57:36.559 "peer_address": { 00:57:36.559 "adrfam": "IPv4", 00:57:36.559 "traddr": "10.0.0.1", 00:57:36.559 "trsvcid": "49662", 00:57:36.559 "trtype": "TCP" 00:57:36.559 }, 00:57:36.559 "qid": 0, 00:57:36.559 "state": "enabled", 00:57:36.559 "thread": "nvmf_tgt_poll_group_000" 00:57:36.559 } 00:57:36.559 ]' 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:36.559 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:36.818 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:57:36.818 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:36.818 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:36.818 10:52:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:36.818 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:36.818 10:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:57:37.385 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:37.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:37.385 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:37.385 10:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:37.385 10:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:37.385 10:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:37.385 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:37.385 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:37.385 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:37.644 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:37.901 00:57:37.901 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:37.901 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:37.901 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:38.160 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:38.160 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:38.160 10:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:38.160 10:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:38.160 10:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:38.160 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:38.160 { 00:57:38.160 "auth": { 00:57:38.160 "dhgroup": "ffdhe2048", 00:57:38.160 "digest": "sha256", 00:57:38.160 "state": "completed" 00:57:38.160 }, 00:57:38.160 "cntlid": 13, 00:57:38.160 "listen_address": { 00:57:38.160 "adrfam": "IPv4", 00:57:38.160 "traddr": "10.0.0.2", 00:57:38.160 "trsvcid": "4420", 00:57:38.160 "trtype": "TCP" 00:57:38.160 }, 00:57:38.160 "peer_address": { 00:57:38.160 "adrfam": "IPv4", 00:57:38.160 "traddr": "10.0.0.1", 00:57:38.160 "trsvcid": "49684", 00:57:38.160 "trtype": "TCP" 00:57:38.160 }, 00:57:38.160 "qid": 0, 00:57:38.160 "state": "enabled", 00:57:38.160 "thread": "nvmf_tgt_poll_group_000" 00:57:38.160 } 00:57:38.160 ]' 00:57:38.160 10:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:38.160 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:38.160 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:38.160 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:57:38.160 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:38.418 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:38.418 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:38.418 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:38.419 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:57:38.984 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:38.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:38.984 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 
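The records above trace one pass of the test's connect_authenticate helper for sha256/ffdhe2048 with key2: bdev_nvme_set_options pins the digest and DH group on the host-side RPC server, nvmf_subsystem_add_host registers the host NQN on the subsystem with a DH-HMAC-CHAP key (and optional controller key), bdev_nvme_attach_controller performs the authenticated connect, and nvmf_subsystem_get_qpairs is inspected with jq until auth.state reads "completed". A condensed sketch of that sequence follows; it is only a reading of the log, with scripts/rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path and $hostnqn for the nqn.2014-08.org.nvmexpress:uuid:... host NQN, and it assumes key2/ckey2 were loaded earlier in the script outside this excerpt.

    # host side (-s /var/tmp/host.sock): restrict the digest and DH group to the pair under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # target side (default RPC socket): allow the host NQN with DH-HMAC-CHAP keys
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # authenticated attach from the host bdev layer
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # confirm the negotiated session on the target
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'    # expected: "completed"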
00:57:38.984 10:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:38.984 10:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:38.984 10:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:38.984 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:38.984 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:38.984 10:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:57:39.241 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:57:39.511 00:57:39.511 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:39.511 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:39.511 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:39.770 { 00:57:39.770 "auth": { 00:57:39.770 "dhgroup": 
"ffdhe2048", 00:57:39.770 "digest": "sha256", 00:57:39.770 "state": "completed" 00:57:39.770 }, 00:57:39.770 "cntlid": 15, 00:57:39.770 "listen_address": { 00:57:39.770 "adrfam": "IPv4", 00:57:39.770 "traddr": "10.0.0.2", 00:57:39.770 "trsvcid": "4420", 00:57:39.770 "trtype": "TCP" 00:57:39.770 }, 00:57:39.770 "peer_address": { 00:57:39.770 "adrfam": "IPv4", 00:57:39.770 "traddr": "10.0.0.1", 00:57:39.770 "trsvcid": "49706", 00:57:39.770 "trtype": "TCP" 00:57:39.770 }, 00:57:39.770 "qid": 0, 00:57:39.770 "state": "enabled", 00:57:39.770 "thread": "nvmf_tgt_poll_group_000" 00:57:39.770 } 00:57:39.770 ]' 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:39.770 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:40.028 10:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:57:40.595 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:40.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:40.595 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:40.595 10:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:40.595 10:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:40.595 10:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:40.595 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:57:40.595 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:40.595 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:40.595 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:40.854 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:41.113 00:57:41.113 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:41.113 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:41.113 10:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:41.372 { 00:57:41.372 "auth": { 00:57:41.372 "dhgroup": "ffdhe3072", 00:57:41.372 "digest": "sha256", 00:57:41.372 "state": "completed" 00:57:41.372 }, 00:57:41.372 "cntlid": 17, 00:57:41.372 "listen_address": { 00:57:41.372 "adrfam": "IPv4", 00:57:41.372 "traddr": "10.0.0.2", 00:57:41.372 "trsvcid": "4420", 00:57:41.372 "trtype": "TCP" 00:57:41.372 }, 00:57:41.372 "peer_address": { 00:57:41.372 "adrfam": "IPv4", 00:57:41.372 "traddr": "10.0.0.1", 00:57:41.372 "trsvcid": "49750", 00:57:41.372 "trtype": "TCP" 00:57:41.372 }, 00:57:41.372 "qid": 0, 00:57:41.372 "state": "enabled", 00:57:41.372 "thread": "nvmf_tgt_poll_group_000" 00:57:41.372 } 00:57:41.372 ]' 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:41.372 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:41.630 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:57:42.198 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:42.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:42.198 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:42.198 10:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:42.198 10:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:42.198 10:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:42.198 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:42.198 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:42.198 10:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:42.456 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:42.457 
10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:42.715 00:57:42.715 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:42.715 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:42.715 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:42.974 { 00:57:42.974 "auth": { 00:57:42.974 "dhgroup": "ffdhe3072", 00:57:42.974 "digest": "sha256", 00:57:42.974 "state": "completed" 00:57:42.974 }, 00:57:42.974 "cntlid": 19, 00:57:42.974 "listen_address": { 00:57:42.974 "adrfam": "IPv4", 00:57:42.974 "traddr": "10.0.0.2", 00:57:42.974 "trsvcid": "4420", 00:57:42.974 "trtype": "TCP" 00:57:42.974 }, 00:57:42.974 "peer_address": { 00:57:42.974 "adrfam": "IPv4", 00:57:42.974 "traddr": "10.0.0.1", 00:57:42.974 "trsvcid": "49794", 00:57:42.974 "trtype": "TCP" 00:57:42.974 }, 00:57:42.974 "qid": 0, 00:57:42.974 "state": "enabled", 00:57:42.974 "thread": "nvmf_tgt_poll_group_000" 00:57:42.974 } 00:57:42.974 ]' 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:42.974 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:42.975 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:43.233 10:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:43.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
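As with the earlier dhgroup passes, each authenticated attach is followed by a host-side round trip through nvme-cli using the same secrets in DHHC-1 text form, after which the host entry is removed from the subsystem before the loop advances to the next key/dhgroup combination. Roughly, as a sketch of what the log shows, with $KEY and $CKEY standing in for the generated DHHC-1 strings and $hostnqn/$hostid for the uuid-based host identity:

    # connect with the kernel initiator using the DHHC-1 secrets, then tear down
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # deauthorize the host on the target before the next iteration
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"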
00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:43.802 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:44.061 00:57:44.061 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:44.061 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:44.061 10:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:44.319 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:44.319 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:44.319 10:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:44.319 10:52:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:57:44.319 10:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:44.319 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:44.319 { 00:57:44.319 "auth": { 00:57:44.319 "dhgroup": "ffdhe3072", 00:57:44.319 "digest": "sha256", 00:57:44.319 "state": "completed" 00:57:44.319 }, 00:57:44.319 "cntlid": 21, 00:57:44.319 "listen_address": { 00:57:44.319 "adrfam": "IPv4", 00:57:44.319 "traddr": "10.0.0.2", 00:57:44.319 "trsvcid": "4420", 00:57:44.319 "trtype": "TCP" 00:57:44.319 }, 00:57:44.319 "peer_address": { 00:57:44.319 "adrfam": "IPv4", 00:57:44.319 "traddr": "10.0.0.1", 00:57:44.319 "trsvcid": "49826", 00:57:44.319 "trtype": "TCP" 00:57:44.319 }, 00:57:44.319 "qid": 0, 00:57:44.319 "state": "enabled", 00:57:44.319 "thread": "nvmf_tgt_poll_group_000" 00:57:44.319 } 00:57:44.319 ]' 00:57:44.319 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:44.319 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:44.319 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:44.319 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:57:44.578 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:44.578 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:44.578 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:44.578 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:44.578 10:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:57:45.195 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:45.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:45.195 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:45.195 10:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:45.195 10:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:45.195 10:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:45.195 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:45.195 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:45.195 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:57:45.453 10:52:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:57:45.453 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:57:45.710 00:57:45.710 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:45.710 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:45.710 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:45.968 { 00:57:45.968 "auth": { 00:57:45.968 "dhgroup": "ffdhe3072", 00:57:45.968 "digest": "sha256", 00:57:45.968 "state": "completed" 00:57:45.968 }, 00:57:45.968 "cntlid": 23, 00:57:45.968 "listen_address": { 00:57:45.968 "adrfam": "IPv4", 00:57:45.968 "traddr": "10.0.0.2", 00:57:45.968 "trsvcid": "4420", 00:57:45.968 "trtype": "TCP" 00:57:45.968 }, 00:57:45.968 "peer_address": { 00:57:45.968 "adrfam": "IPv4", 00:57:45.968 "traddr": "10.0.0.1", 00:57:45.968 "trsvcid": "59426", 00:57:45.968 "trtype": "TCP" 00:57:45.968 }, 00:57:45.968 "qid": 0, 00:57:45.968 "state": "enabled", 00:57:45.968 "thread": "nvmf_tgt_poll_group_000" 00:57:45.968 } 00:57:45.968 ]' 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:45.968 10:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:46.226 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:57:46.794 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:46.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:46.794 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:46.794 10:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:46.794 10:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:46.794 10:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:46.794 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:57:46.794 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:46.794 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:46.794 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:47.052 10:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:47.311 00:57:47.311 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:47.311 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:47.311 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:47.569 { 00:57:47.569 "auth": { 00:57:47.569 "dhgroup": "ffdhe4096", 00:57:47.569 "digest": "sha256", 00:57:47.569 "state": "completed" 00:57:47.569 }, 00:57:47.569 "cntlid": 25, 00:57:47.569 "listen_address": { 00:57:47.569 "adrfam": "IPv4", 00:57:47.569 "traddr": "10.0.0.2", 00:57:47.569 "trsvcid": "4420", 00:57:47.569 "trtype": "TCP" 00:57:47.569 }, 00:57:47.569 "peer_address": { 00:57:47.569 "adrfam": "IPv4", 00:57:47.569 "traddr": "10.0.0.1", 00:57:47.569 "trsvcid": "59450", 00:57:47.569 "trtype": "TCP" 00:57:47.569 }, 00:57:47.569 "qid": 0, 00:57:47.569 "state": "enabled", 00:57:47.569 "thread": "nvmf_tgt_poll_group_000" 00:57:47.569 } 00:57:47.569 ]' 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:57:47.569 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:47.828 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:47.828 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:47.828 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:47.828 10:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret 
DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:57:48.397 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:48.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:48.397 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:48.397 10:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:48.397 10:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:48.397 10:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:48.397 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:48.397 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:48.397 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:48.656 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:48.916 00:57:48.916 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:48.916 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:48.916 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:49.175 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:57:49.175 10:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:49.175 10:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:49.175 10:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:49.175 10:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:49.175 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:49.175 { 00:57:49.175 "auth": { 00:57:49.175 "dhgroup": "ffdhe4096", 00:57:49.175 "digest": "sha256", 00:57:49.175 "state": "completed" 00:57:49.175 }, 00:57:49.175 "cntlid": 27, 00:57:49.175 "listen_address": { 00:57:49.175 "adrfam": "IPv4", 00:57:49.175 "traddr": "10.0.0.2", 00:57:49.175 "trsvcid": "4420", 00:57:49.175 "trtype": "TCP" 00:57:49.175 }, 00:57:49.175 "peer_address": { 00:57:49.175 "adrfam": "IPv4", 00:57:49.175 "traddr": "10.0.0.1", 00:57:49.175 "trsvcid": "59482", 00:57:49.175 "trtype": "TCP" 00:57:49.175 }, 00:57:49.175 "qid": 0, 00:57:49.175 "state": "enabled", 00:57:49.175 "thread": "nvmf_tgt_poll_group_000" 00:57:49.175 } 00:57:49.175 ]' 00:57:49.175 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:49.175 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:49.175 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:49.175 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:57:49.175 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:49.434 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:49.434 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:49.434 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:49.434 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:57:50.003 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:50.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:50.003 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:50.003 10:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:50.003 10:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:50.003 10:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:50.003 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:50.003 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:50.003 10:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:50.263 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:50.522 00:57:50.522 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:50.522 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:50.522 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:50.781 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:50.781 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:50.781 10:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:50.781 10:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:50.781 10:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:50.781 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:50.781 { 00:57:50.781 "auth": { 00:57:50.782 "dhgroup": "ffdhe4096", 00:57:50.782 "digest": "sha256", 00:57:50.782 "state": "completed" 00:57:50.782 }, 00:57:50.782 "cntlid": 29, 00:57:50.782 "listen_address": { 00:57:50.782 "adrfam": "IPv4", 00:57:50.782 "traddr": "10.0.0.2", 00:57:50.782 "trsvcid": "4420", 00:57:50.782 "trtype": "TCP" 00:57:50.782 }, 00:57:50.782 "peer_address": { 00:57:50.782 "adrfam": "IPv4", 00:57:50.782 "traddr": "10.0.0.1", 00:57:50.782 "trsvcid": "59506", 00:57:50.782 "trtype": "TCP" 00:57:50.782 }, 00:57:50.782 "qid": 0, 00:57:50.782 "state": "enabled", 00:57:50.782 "thread": 
"nvmf_tgt_poll_group_000" 00:57:50.782 } 00:57:50.782 ]' 00:57:50.782 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:50.782 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:50.782 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:51.041 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:57:51.041 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:51.041 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:51.041 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:51.041 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:51.041 10:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:57:51.611 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:51.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:51.611 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:51.611 10:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:51.611 10:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:51.611 10:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:51.611 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:51.611 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:51.611 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:51.870 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:57:51.871 10:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:57:52.130 00:57:52.130 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:52.130 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:52.130 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:52.388 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:52.388 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:52.388 10:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:52.388 10:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:52.388 10:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:52.388 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:52.388 { 00:57:52.388 "auth": { 00:57:52.388 "dhgroup": "ffdhe4096", 00:57:52.388 "digest": "sha256", 00:57:52.388 "state": "completed" 00:57:52.388 }, 00:57:52.388 "cntlid": 31, 00:57:52.388 "listen_address": { 00:57:52.388 "adrfam": "IPv4", 00:57:52.388 "traddr": "10.0.0.2", 00:57:52.388 "trsvcid": "4420", 00:57:52.388 "trtype": "TCP" 00:57:52.388 }, 00:57:52.388 "peer_address": { 00:57:52.388 "adrfam": "IPv4", 00:57:52.388 "traddr": "10.0.0.1", 00:57:52.388 "trsvcid": "59542", 00:57:52.388 "trtype": "TCP" 00:57:52.388 }, 00:57:52.388 "qid": 0, 00:57:52.388 "state": "enabled", 00:57:52.388 "thread": "nvmf_tgt_poll_group_000" 00:57:52.388 } 00:57:52.388 ]' 00:57:52.388 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:52.388 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:52.388 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:52.646 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:57:52.646 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:52.646 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:52.646 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:52.646 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:52.646 10:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 
5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:57:53.267 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:53.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:53.267 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:53.267 10:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:53.267 10:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:53.267 10:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:53.267 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:57:53.267 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:53.267 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:53.267 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:53.582 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:53.842 00:57:53.842 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:53.842 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:53.842 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:54.101 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:54.101 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:54.101 10:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:54.101 10:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:54.101 10:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:54.101 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:54.101 { 00:57:54.101 "auth": { 00:57:54.101 "dhgroup": "ffdhe6144", 00:57:54.101 "digest": "sha256", 00:57:54.101 "state": "completed" 00:57:54.101 }, 00:57:54.101 "cntlid": 33, 00:57:54.101 "listen_address": { 00:57:54.101 "adrfam": "IPv4", 00:57:54.101 "traddr": "10.0.0.2", 00:57:54.101 "trsvcid": "4420", 00:57:54.101 "trtype": "TCP" 00:57:54.101 }, 00:57:54.101 "peer_address": { 00:57:54.101 "adrfam": "IPv4", 00:57:54.101 "traddr": "10.0.0.1", 00:57:54.101 "trsvcid": "59564", 00:57:54.101 "trtype": "TCP" 00:57:54.101 }, 00:57:54.101 "qid": 0, 00:57:54.101 "state": "enabled", 00:57:54.101 "thread": "nvmf_tgt_poll_group_000" 00:57:54.101 } 00:57:54.101 ]' 00:57:54.101 10:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:54.101 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:54.101 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:54.358 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:57:54.358 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:54.358 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:54.358 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:54.358 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:54.358 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:57:54.926 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:54.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:54.926 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:54.926 10:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:54.926 10:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:54.926 10:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:54.926 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:57:54.926 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:54.926 10:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:55.185 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:57:55.185 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:55.186 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:55.186 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:57:55.186 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:57:55.186 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:55.186 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:55.186 10:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:55.186 10:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:55.186 10:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:55.186 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:55.186 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:55.751 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:55.751 { 00:57:55.751 "auth": { 00:57:55.751 "dhgroup": "ffdhe6144", 00:57:55.751 "digest": "sha256", 00:57:55.751 "state": "completed" 00:57:55.751 }, 00:57:55.751 "cntlid": 35, 00:57:55.751 "listen_address": { 00:57:55.751 "adrfam": "IPv4", 00:57:55.751 "traddr": "10.0.0.2", 00:57:55.751 "trsvcid": "4420", 00:57:55.751 "trtype": "TCP" 00:57:55.751 }, 00:57:55.751 
"peer_address": { 00:57:55.751 "adrfam": "IPv4", 00:57:55.751 "traddr": "10.0.0.1", 00:57:55.751 "trsvcid": "50316", 00:57:55.751 "trtype": "TCP" 00:57:55.751 }, 00:57:55.751 "qid": 0, 00:57:55.751 "state": "enabled", 00:57:55.751 "thread": "nvmf_tgt_poll_group_000" 00:57:55.751 } 00:57:55.751 ]' 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:55.751 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:56.009 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:57:56.009 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:56.009 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:56.009 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:56.009 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:56.266 10:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:56.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:56.832 10:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:57.398 00:57:57.398 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:57:57.398 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:57.398 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:57.398 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:57.656 { 00:57:57.656 "auth": { 00:57:57.656 "dhgroup": "ffdhe6144", 00:57:57.656 "digest": "sha256", 00:57:57.656 "state": "completed" 00:57:57.656 }, 00:57:57.656 "cntlid": 37, 00:57:57.656 "listen_address": { 00:57:57.656 "adrfam": "IPv4", 00:57:57.656 "traddr": "10.0.0.2", 00:57:57.656 "trsvcid": "4420", 00:57:57.656 "trtype": "TCP" 00:57:57.656 }, 00:57:57.656 "peer_address": { 00:57:57.656 "adrfam": "IPv4", 00:57:57.656 "traddr": "10.0.0.1", 00:57:57.656 "trsvcid": "50332", 00:57:57.656 "trtype": "TCP" 00:57:57.656 }, 00:57:57.656 "qid": 0, 00:57:57.656 "state": "enabled", 00:57:57.656 "thread": "nvmf_tgt_poll_group_000" 00:57:57.656 } 00:57:57.656 ]' 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:57.656 10:53:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:57.914 10:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:57:58.480 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:57:58.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:57:58.480 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:57:58.480 10:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:58.480 10:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:58.480 10:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:58.480 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:57:58.480 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:58.480 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:57:58.738 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:57:58.996 00:57:58.996 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:57:58.996 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:57:58.996 10:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:57:59.254 { 00:57:59.254 "auth": { 00:57:59.254 "dhgroup": "ffdhe6144", 00:57:59.254 "digest": "sha256", 00:57:59.254 "state": "completed" 00:57:59.254 }, 00:57:59.254 "cntlid": 39, 00:57:59.254 "listen_address": { 00:57:59.254 "adrfam": "IPv4", 00:57:59.254 "traddr": "10.0.0.2", 00:57:59.254 "trsvcid": "4420", 00:57:59.254 "trtype": "TCP" 00:57:59.254 }, 00:57:59.254 "peer_address": { 00:57:59.254 "adrfam": "IPv4", 00:57:59.254 "traddr": "10.0.0.1", 00:57:59.254 "trsvcid": "50372", 00:57:59.254 "trtype": "TCP" 00:57:59.254 }, 00:57:59.254 "qid": 0, 00:57:59.254 "state": "enabled", 00:57:59.254 "thread": "nvmf_tgt_poll_group_000" 00:57:59.254 } 00:57:59.254 ]' 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:57:59.254 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:57:59.512 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:58:00.078 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:00.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:00.078 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:00.078 10:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:00.078 10:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:00.078 10:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:58:00.078 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:58:00.078 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:00.078 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:58:00.078 10:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:00.336 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:00.903 00:58:00.903 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:00.903 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:00.903 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:01.162 { 00:58:01.162 "auth": { 00:58:01.162 "dhgroup": "ffdhe8192", 00:58:01.162 "digest": "sha256", 00:58:01.162 "state": "completed" 00:58:01.162 }, 00:58:01.162 "cntlid": 41, 
00:58:01.162 "listen_address": { 00:58:01.162 "adrfam": "IPv4", 00:58:01.162 "traddr": "10.0.0.2", 00:58:01.162 "trsvcid": "4420", 00:58:01.162 "trtype": "TCP" 00:58:01.162 }, 00:58:01.162 "peer_address": { 00:58:01.162 "adrfam": "IPv4", 00:58:01.162 "traddr": "10.0.0.1", 00:58:01.162 "trsvcid": "50402", 00:58:01.162 "trtype": "TCP" 00:58:01.162 }, 00:58:01.162 "qid": 0, 00:58:01.162 "state": "enabled", 00:58:01.162 "thread": "nvmf_tgt_poll_group_000" 00:58:01.162 } 00:58:01.162 ]' 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:58:01.162 10:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:01.162 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:01.162 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:01.162 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:01.421 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:58:01.988 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:01.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:01.988 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:01.988 10:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:01.988 10:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:01.988 10:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:01.988 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:01.988 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:58:01.988 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:58:02.253 
10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:02.253 10:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:02.819 00:58:02.819 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:02.819 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:02.819 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:02.819 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:02.819 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:02.819 10:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:02.819 10:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:02.819 10:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:02.819 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:02.819 { 00:58:02.819 "auth": { 00:58:02.819 "dhgroup": "ffdhe8192", 00:58:02.819 "digest": "sha256", 00:58:02.819 "state": "completed" 00:58:02.819 }, 00:58:02.819 "cntlid": 43, 00:58:02.819 "listen_address": { 00:58:02.819 "adrfam": "IPv4", 00:58:02.819 "traddr": "10.0.0.2", 00:58:02.819 "trsvcid": "4420", 00:58:02.819 "trtype": "TCP" 00:58:02.819 }, 00:58:02.819 "peer_address": { 00:58:02.819 "adrfam": "IPv4", 00:58:02.819 "traddr": "10.0.0.1", 00:58:02.819 "trsvcid": "50416", 00:58:02.819 "trtype": "TCP" 00:58:02.819 }, 00:58:02.819 "qid": 0, 00:58:02.819 "state": "enabled", 00:58:02.819 "thread": "nvmf_tgt_poll_group_000" 00:58:02.819 } 00:58:02.819 ]' 00:58:02.819 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:03.077 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:03.077 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:03.077 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:58:03.077 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:03.077 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:58:03.077 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:03.077 10:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:03.334 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:58:03.901 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:03.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:03.901 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:03.901 10:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:03.901 10:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:03.901 10:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:03.901 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:03.901 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:58:03.901 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:04.160 10:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:04.727 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:04.727 { 00:58:04.727 "auth": { 00:58:04.727 "dhgroup": "ffdhe8192", 00:58:04.727 "digest": "sha256", 00:58:04.727 "state": "completed" 00:58:04.727 }, 00:58:04.727 "cntlid": 45, 00:58:04.727 "listen_address": { 00:58:04.727 "adrfam": "IPv4", 00:58:04.727 "traddr": "10.0.0.2", 00:58:04.727 "trsvcid": "4420", 00:58:04.727 "trtype": "TCP" 00:58:04.727 }, 00:58:04.727 "peer_address": { 00:58:04.727 "adrfam": "IPv4", 00:58:04.727 "traddr": "10.0.0.1", 00:58:04.727 "trsvcid": "50440", 00:58:04.727 "trtype": "TCP" 00:58:04.727 }, 00:58:04.727 "qid": 0, 00:58:04.727 "state": "enabled", 00:58:04.727 "thread": "nvmf_tgt_poll_group_000" 00:58:04.727 } 00:58:04.727 ]' 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:04.727 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:04.985 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:58:04.985 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:04.985 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:04.985 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:04.985 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:05.242 10:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:05.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:05.807 10:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:06.374 00:58:06.374 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:06.374 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:06.374 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:58:06.634 { 00:58:06.634 "auth": { 00:58:06.634 "dhgroup": "ffdhe8192", 00:58:06.634 "digest": "sha256", 00:58:06.634 "state": "completed" 00:58:06.634 }, 00:58:06.634 "cntlid": 47, 00:58:06.634 "listen_address": { 00:58:06.634 "adrfam": "IPv4", 00:58:06.634 "traddr": "10.0.0.2", 00:58:06.634 "trsvcid": "4420", 00:58:06.634 "trtype": "TCP" 00:58:06.634 }, 00:58:06.634 "peer_address": { 00:58:06.634 "adrfam": "IPv4", 00:58:06.634 "traddr": "10.0.0.1", 00:58:06.634 "trsvcid": "49472", 00:58:06.634 "trtype": "TCP" 00:58:06.634 }, 00:58:06.634 "qid": 0, 00:58:06.634 "state": "enabled", 00:58:06.634 "thread": "nvmf_tgt_poll_group_000" 00:58:06.634 } 00:58:06.634 ]' 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:06.634 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:06.893 10:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:58:07.462 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:07.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:07.462 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:07.462 10:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:07.462 10:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:07.462 10:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:07.462 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:58:07.462 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:58:07.462 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:07.462 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:58:07.462 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:58:07.723 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:58:07.723 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:58:07.723 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:07.723 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:58:07.723 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:58:07.723 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:07.723 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:07.723 10:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:07.723 10:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:07.723 10:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:07.724 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:07.724 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:07.985 00:58:07.985 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:07.985 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:07.985 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:08.248 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:08.248 10:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:08.248 10:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:08.248 10:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:08.248 10:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:08.248 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:08.248 { 00:58:08.248 "auth": { 00:58:08.248 "dhgroup": "null", 00:58:08.248 "digest": "sha384", 00:58:08.248 "state": "completed" 00:58:08.248 }, 00:58:08.248 "cntlid": 49, 00:58:08.248 "listen_address": { 00:58:08.248 "adrfam": "IPv4", 00:58:08.248 "traddr": "10.0.0.2", 00:58:08.248 "trsvcid": "4420", 00:58:08.248 "trtype": "TCP" 00:58:08.248 }, 00:58:08.248 "peer_address": { 00:58:08.248 "adrfam": "IPv4", 00:58:08.248 "traddr": "10.0.0.1", 00:58:08.248 "trsvcid": "49494", 00:58:08.248 "trtype": "TCP" 00:58:08.248 }, 00:58:08.248 "qid": 0, 00:58:08.248 "state": "enabled", 00:58:08.248 "thread": "nvmf_tgt_poll_group_000" 00:58:08.248 } 00:58:08.248 ]' 00:58:08.248 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:08.248 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:08.248 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:08.248 10:53:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:58:08.248 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:08.248 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:08.248 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:08.248 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:08.506 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:58:09.071 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:09.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:09.071 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:09.071 10:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:09.071 10:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:09.071 10:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:09.071 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:09.071 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:58:09.071 10:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:09.327 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:09.585 00:58:09.585 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:09.585 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:09.585 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:09.842 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:09.842 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:09.842 10:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:09.842 10:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:09.842 10:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:09.842 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:09.842 { 00:58:09.842 "auth": { 00:58:09.842 "dhgroup": "null", 00:58:09.842 "digest": "sha384", 00:58:09.842 "state": "completed" 00:58:09.842 }, 00:58:09.842 "cntlid": 51, 00:58:09.842 "listen_address": { 00:58:09.842 "adrfam": "IPv4", 00:58:09.842 "traddr": "10.0.0.2", 00:58:09.842 "trsvcid": "4420", 00:58:09.842 "trtype": "TCP" 00:58:09.842 }, 00:58:09.842 "peer_address": { 00:58:09.842 "adrfam": "IPv4", 00:58:09.842 "traddr": "10.0.0.1", 00:58:09.843 "trsvcid": "49522", 00:58:09.843 "trtype": "TCP" 00:58:09.843 }, 00:58:09.843 "qid": 0, 00:58:09.843 "state": "enabled", 00:58:09.843 "thread": "nvmf_tgt_poll_group_000" 00:58:09.843 } 00:58:09.843 ]' 00:58:09.843 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:09.843 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:09.843 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:09.843 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:58:09.843 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:09.843 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:09.843 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:09.843 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:10.102 10:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:58:10.718 10:53:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:10.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:10.718 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:10.718 10:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:10.718 10:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:10.718 10:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:10.718 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:10.718 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:58:10.718 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:10.976 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:11.234 00:58:11.234 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:11.234 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:11.234 10:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:11.234 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:11.492 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:11.493 { 00:58:11.493 "auth": { 00:58:11.493 "dhgroup": "null", 00:58:11.493 "digest": "sha384", 00:58:11.493 "state": "completed" 00:58:11.493 }, 00:58:11.493 "cntlid": 53, 00:58:11.493 "listen_address": { 00:58:11.493 "adrfam": "IPv4", 00:58:11.493 "traddr": "10.0.0.2", 00:58:11.493 "trsvcid": "4420", 00:58:11.493 "trtype": "TCP" 00:58:11.493 }, 00:58:11.493 "peer_address": { 00:58:11.493 "adrfam": "IPv4", 00:58:11.493 "traddr": "10.0.0.1", 00:58:11.493 "trsvcid": "49556", 00:58:11.493 "trtype": "TCP" 00:58:11.493 }, 00:58:11.493 "qid": 0, 00:58:11.493 "state": "enabled", 00:58:11.493 "thread": "nvmf_tgt_poll_group_000" 00:58:11.493 } 00:58:11.493 ]' 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:11.493 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:11.751 10:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:12.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
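
Once the controller is up, the script verifies on both sides that DH-HMAC-CHAP actually completed with the parameters under test. A condensed sketch of that verification, using the same RPCs and jq filters seen in the trace:

    subnqn=nqn.2024-03.io.spdk:cnode0

    # The host-side controller should be visible under the expected name.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # On the target, inspect the qpair's negotiated auth parameters.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn")
    echo "$qpairs" | jq -r '.[0].auth.digest'    # expect: sha384
    echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expect: null (or the ffdhe* group under test)
    echo "$qpairs" | jq -r '.[0].auth.state'     # expect: completed
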
-- # connect_authenticate sha384 null 3 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:58:12.319 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:12.577 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:58:12.577 10:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:12.577 10:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:12.577 10:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:12.577 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:12.577 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:12.577 00:58:12.836 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:12.836 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:12.836 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:12.836 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:12.836 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:12.836 10:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:12.836 10:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:12.836 10:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:12.836 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:12.836 { 00:58:12.836 "auth": { 00:58:12.836 "dhgroup": "null", 00:58:12.836 "digest": "sha384", 00:58:12.836 "state": "completed" 00:58:12.836 }, 00:58:12.836 "cntlid": 55, 00:58:12.836 "listen_address": { 00:58:12.836 "adrfam": "IPv4", 00:58:12.836 "traddr": "10.0.0.2", 00:58:12.836 "trsvcid": "4420", 00:58:12.836 "trtype": "TCP" 00:58:12.836 }, 00:58:12.836 "peer_address": { 00:58:12.836 "adrfam": "IPv4", 00:58:12.836 "traddr": "10.0.0.1", 00:58:12.836 "trsvcid": "49598", 00:58:12.836 "trtype": "TCP" 00:58:12.836 }, 00:58:12.836 "qid": 0, 00:58:12.836 "state": "enabled", 00:58:12.836 "thread": "nvmf_tgt_poll_group_000" 00:58:12.836 } 00:58:12.836 ]' 00:58:12.836 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:13.096 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:13.096 10:53:20 
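
Note that for key3 the trace passes --dhchap-key key3 with no --dhchap-ctrlr-key at all: connect_authenticate builds the controller-key argument with bash's ${var:+...} expansion, so keys without a controller secret simply drop the flag rather than passing an empty value. A small illustration of that pattern (the ckeys contents here are hypothetical):

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7

    # Hypothetical key table: index 0 has a controller key, index 1 does not.
    ckeys=("ckey0" "")
    keyid=1

    # ${var:+word} expands to nothing when var is empty, so the whole flag
    # pair disappears for unidirectional keys such as key3 in the trace.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

    scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
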
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:13.096 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:58:13.096 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:13.096 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:13.096 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:13.096 10:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:13.357 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:13.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
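
Each iteration also exercises the kernel host through nvme-cli, where the DH-HMAC-CHAP secrets are passed inline in DHHC-1 form rather than by key name. A sketch of that connect/disconnect pair, with the secrets held in placeholder variables (the literal values appear in the trace above):

    # $HOST_KEY / $CTRL_KEY stand in for DHHC-1:xx:...-formatted secrets.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 \
        --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 \
        --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"

    # ...and tear the session down again once the handshake has been exercised.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
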
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:13.922 10:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:14.180 00:58:14.180 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:14.180 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:14.180 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:14.437 { 00:58:14.437 "auth": { 00:58:14.437 "dhgroup": "ffdhe2048", 00:58:14.437 "digest": "sha384", 00:58:14.437 "state": "completed" 00:58:14.437 }, 00:58:14.437 "cntlid": 57, 00:58:14.437 "listen_address": { 00:58:14.437 "adrfam": "IPv4", 00:58:14.437 "traddr": "10.0.0.2", 00:58:14.437 "trsvcid": "4420", 00:58:14.437 "trtype": "TCP" 00:58:14.437 }, 00:58:14.437 "peer_address": { 00:58:14.437 "adrfam": "IPv4", 00:58:14.437 "traddr": "10.0.0.1", 00:58:14.437 "trsvcid": "49612", 00:58:14.437 "trtype": "TCP" 00:58:14.437 }, 00:58:14.437 "qid": 0, 00:58:14.437 "state": "enabled", 00:58:14.437 "thread": "nvmf_tgt_poll_group_000" 00:58:14.437 } 00:58:14.437 ]' 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:14.437 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:14.694 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:14.694 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:14.694 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:14.694 10:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret 
DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:58:15.261 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:15.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:15.261 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:15.261 10:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:15.261 10:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:15.261 10:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:15.261 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:15.261 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:58:15.261 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:15.519 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:15.777 00:58:15.777 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:15.777 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:15.777 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:16.035 { 00:58:16.035 "auth": { 00:58:16.035 "dhgroup": "ffdhe2048", 00:58:16.035 "digest": "sha384", 00:58:16.035 "state": "completed" 00:58:16.035 }, 00:58:16.035 "cntlid": 59, 00:58:16.035 "listen_address": { 00:58:16.035 "adrfam": "IPv4", 00:58:16.035 "traddr": "10.0.0.2", 00:58:16.035 "trsvcid": "4420", 00:58:16.035 "trtype": "TCP" 00:58:16.035 }, 00:58:16.035 "peer_address": { 00:58:16.035 "adrfam": "IPv4", 00:58:16.035 "traddr": "10.0.0.1", 00:58:16.035 "trsvcid": "57110", 00:58:16.035 "trtype": "TCP" 00:58:16.035 }, 00:58:16.035 "qid": 0, 00:58:16.035 "state": "enabled", 00:58:16.035 "thread": "nvmf_tgt_poll_group_000" 00:58:16.035 } 00:58:16.035 ]' 00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:16.035 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:16.297 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:16.297 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:16.297 10:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:16.297 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:58:16.864 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:16.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:16.864 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:16.864 10:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:16.864 10:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:16.864 10:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:16.864 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:16.864 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:58:16.864 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:17.121 10:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:17.378 00:58:17.378 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:17.378 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:17.378 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:17.637 { 00:58:17.637 "auth": { 00:58:17.637 "dhgroup": "ffdhe2048", 00:58:17.637 "digest": "sha384", 00:58:17.637 "state": "completed" 00:58:17.637 }, 00:58:17.637 "cntlid": 61, 00:58:17.637 "listen_address": { 00:58:17.637 "adrfam": "IPv4", 00:58:17.637 "traddr": "10.0.0.2", 00:58:17.637 "trsvcid": "4420", 00:58:17.637 "trtype": "TCP" 00:58:17.637 }, 00:58:17.637 "peer_address": { 00:58:17.637 "adrfam": "IPv4", 00:58:17.637 "traddr": "10.0.0.1", 00:58:17.637 "trsvcid": "57148", 00:58:17.637 "trtype": "TCP" 00:58:17.637 }, 00:58:17.637 "qid": 0, 00:58:17.637 "state": "enabled", 00:58:17.637 "thread": 
"nvmf_tgt_poll_group_000" 00:58:17.637 } 00:58:17.637 ]' 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:17.637 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:17.895 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:17.895 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:17.895 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:17.895 10:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:58:18.459 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:18.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:18.459 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:18.459 10:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:18.459 10:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:18.459 10:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:18.459 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:18.459 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:58:18.459 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:18.717 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:18.974 00:58:18.974 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:18.974 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:18.974 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:19.231 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:19.231 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:19.231 10:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:19.231 10:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:19.231 10:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:19.231 10:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:19.231 { 00:58:19.231 "auth": { 00:58:19.231 "dhgroup": "ffdhe2048", 00:58:19.231 "digest": "sha384", 00:58:19.231 "state": "completed" 00:58:19.231 }, 00:58:19.231 "cntlid": 63, 00:58:19.231 "listen_address": { 00:58:19.231 "adrfam": "IPv4", 00:58:19.231 "traddr": "10.0.0.2", 00:58:19.231 "trsvcid": "4420", 00:58:19.231 "trtype": "TCP" 00:58:19.231 }, 00:58:19.231 "peer_address": { 00:58:19.231 "adrfam": "IPv4", 00:58:19.231 "traddr": "10.0.0.1", 00:58:19.231 "trsvcid": "57184", 00:58:19.231 "trtype": "TCP" 00:58:19.231 }, 00:58:19.231 "qid": 0, 00:58:19.231 "state": "enabled", 00:58:19.231 "thread": "nvmf_tgt_poll_group_000" 00:58:19.231 } 00:58:19.231 ]' 00:58:19.231 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:19.231 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:19.231 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:19.231 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:19.231 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:19.231 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:19.231 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:19.231 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:19.488 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 
5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:58:20.101 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:20.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:20.101 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:20.101 10:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:20.101 10:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:20.101 10:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:20.101 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:58:20.101 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:20.101 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:58:20.101 10:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:20.393 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:20.650 00:58:20.650 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:20.650 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:20.650 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
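
The @92/@93 markers explain why the same sequence keeps repeating: auth.sh iterates every (dhgroup, key) combination for the sha384 digest, restricting the host's allowed parameters before each attempt. The loop reduces to roughly the following sketch; the dhgroups array lists only the groups visible in this excerpt, and the full script may cover more digests and groups.

    digest=sha384
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)   # groups exercised in this excerpt
    keys=(key0 key1 key2 key3)

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Limit the host to exactly one digest/dhgroup so the negotiation is deterministic.
            scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # connect_authenticate is the auth.sh helper traced above:
            # add_host, attach, verify qpairs, detach, nvme connect/disconnect, remove_host.
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
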
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:20.650 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:20.650 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:20.650 10:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:20.650 10:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:20.908 10:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:20.908 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:20.908 { 00:58:20.908 "auth": { 00:58:20.908 "dhgroup": "ffdhe3072", 00:58:20.908 "digest": "sha384", 00:58:20.908 "state": "completed" 00:58:20.908 }, 00:58:20.908 "cntlid": 65, 00:58:20.908 "listen_address": { 00:58:20.908 "adrfam": "IPv4", 00:58:20.908 "traddr": "10.0.0.2", 00:58:20.908 "trsvcid": "4420", 00:58:20.908 "trtype": "TCP" 00:58:20.908 }, 00:58:20.908 "peer_address": { 00:58:20.908 "adrfam": "IPv4", 00:58:20.908 "traddr": "10.0.0.1", 00:58:20.908 "trsvcid": "57206", 00:58:20.908 "trtype": "TCP" 00:58:20.908 }, 00:58:20.908 "qid": 0, 00:58:20.908 "state": "enabled", 00:58:20.908 "thread": "nvmf_tgt_poll_group_000" 00:58:20.908 } 00:58:20.908 ]' 00:58:20.908 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:20.908 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:20.908 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:20.908 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:58:20.908 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:20.909 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:20.909 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:20.909 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:21.169 10:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:58:21.735 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:21.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:21.735 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:21.735 10:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:21.735 10:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:21.735 10:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:21.735 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:58:21.735 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:58:21.735 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:21.993 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:22.252 00:58:22.252 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:22.252 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:22.252 10:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:22.252 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:22.252 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:22.252 10:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:22.252 10:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:22.510 10:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:22.511 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:22.511 { 00:58:22.511 "auth": { 00:58:22.511 "dhgroup": "ffdhe3072", 00:58:22.511 "digest": "sha384", 00:58:22.511 "state": "completed" 00:58:22.511 }, 00:58:22.511 "cntlid": 67, 00:58:22.511 "listen_address": { 00:58:22.511 "adrfam": "IPv4", 00:58:22.511 "traddr": "10.0.0.2", 00:58:22.511 "trsvcid": "4420", 00:58:22.511 "trtype": "TCP" 00:58:22.511 }, 00:58:22.511 
"peer_address": { 00:58:22.511 "adrfam": "IPv4", 00:58:22.511 "traddr": "10.0.0.1", 00:58:22.511 "trsvcid": "57240", 00:58:22.511 "trtype": "TCP" 00:58:22.511 }, 00:58:22.511 "qid": 0, 00:58:22.511 "state": "enabled", 00:58:22.511 "thread": "nvmf_tgt_poll_group_000" 00:58:22.511 } 00:58:22.511 ]' 00:58:22.511 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:22.511 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:22.511 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:22.511 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:58:22.511 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:22.511 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:22.511 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:22.511 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:22.770 10:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:23.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:23.337 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:23.338 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:23.597 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:23.857 { 00:58:23.857 "auth": { 00:58:23.857 "dhgroup": "ffdhe3072", 00:58:23.857 "digest": "sha384", 00:58:23.857 "state": "completed" 00:58:23.857 }, 00:58:23.857 "cntlid": 69, 00:58:23.857 "listen_address": { 00:58:23.857 "adrfam": "IPv4", 00:58:23.857 "traddr": "10.0.0.2", 00:58:23.857 "trsvcid": "4420", 00:58:23.857 "trtype": "TCP" 00:58:23.857 }, 00:58:23.857 "peer_address": { 00:58:23.857 "adrfam": "IPv4", 00:58:23.857 "traddr": "10.0.0.1", 00:58:23.857 "trsvcid": "57264", 00:58:23.857 "trtype": "TCP" 00:58:23.857 }, 00:58:23.857 "qid": 0, 00:58:23.857 "state": "enabled", 00:58:23.857 "thread": "nvmf_tgt_poll_group_000" 00:58:23.857 } 00:58:23.857 ]' 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:23.857 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:24.116 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:58:24.116 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:24.116 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:24.116 10:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:24.116 10:53:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:24.375 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:58:24.941 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:24.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:24.941 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:24.941 10:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:24.942 10:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:25.199 00:58:25.199 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:58:25.199 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:25.199 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:25.456 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:25.456 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:25.456 10:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:25.456 10:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:25.456 10:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:25.456 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:25.456 { 00:58:25.456 "auth": { 00:58:25.456 "dhgroup": "ffdhe3072", 00:58:25.456 "digest": "sha384", 00:58:25.456 "state": "completed" 00:58:25.456 }, 00:58:25.456 "cntlid": 71, 00:58:25.456 "listen_address": { 00:58:25.456 "adrfam": "IPv4", 00:58:25.456 "traddr": "10.0.0.2", 00:58:25.456 "trsvcid": "4420", 00:58:25.456 "trtype": "TCP" 00:58:25.456 }, 00:58:25.456 "peer_address": { 00:58:25.456 "adrfam": "IPv4", 00:58:25.456 "traddr": "10.0.0.1", 00:58:25.456 "trsvcid": "41626", 00:58:25.456 "trtype": "TCP" 00:58:25.456 }, 00:58:25.456 "qid": 0, 00:58:25.456 "state": "enabled", 00:58:25.456 "thread": "nvmf_tgt_poll_group_000" 00:58:25.456 } 00:58:25.456 ]' 00:58:25.456 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:25.456 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:25.456 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:25.714 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:58:25.714 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:25.714 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:25.714 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:25.714 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:25.971 10:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:26.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
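
Every iteration ends with a symmetric teardown so the next (dhgroup, key) combination starts clean: the SPDK initiator controller is detached, the nvme-cli session is disconnected, and the host entry is removed from the subsystem. In isolation that cleanup is just:

    # Host side: drop the SPDK bdev_nvme controller created for this iteration.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Kernel initiator: close the nvme-cli session.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Target side: revoke the host's access (and its DH-CHAP key binding).
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7
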
00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:26.535 10:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:26.536 10:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:26.536 10:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:26.536 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:26.536 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:26.794 00:58:26.794 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:26.794 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:26.794 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:27.052 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:27.052 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:27.052 10:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:27.052 10:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:27.052 10:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:27.052 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:27.052 { 00:58:27.052 "auth": { 00:58:27.052 "dhgroup": "ffdhe4096", 00:58:27.052 "digest": "sha384", 00:58:27.052 "state": "completed" 00:58:27.052 }, 00:58:27.052 "cntlid": 73, 
00:58:27.052 "listen_address": { 00:58:27.052 "adrfam": "IPv4", 00:58:27.052 "traddr": "10.0.0.2", 00:58:27.052 "trsvcid": "4420", 00:58:27.052 "trtype": "TCP" 00:58:27.052 }, 00:58:27.052 "peer_address": { 00:58:27.052 "adrfam": "IPv4", 00:58:27.052 "traddr": "10.0.0.1", 00:58:27.052 "trsvcid": "41654", 00:58:27.052 "trtype": "TCP" 00:58:27.052 }, 00:58:27.052 "qid": 0, 00:58:27.052 "state": "enabled", 00:58:27.052 "thread": "nvmf_tgt_poll_group_000" 00:58:27.052 } 00:58:27.052 ]' 00:58:27.052 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:27.052 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:27.052 10:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:27.310 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:58:27.310 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:27.310 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:27.310 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:27.310 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:27.568 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:58:28.134 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:28.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:28.134 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:28.134 10:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:28.134 10:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:28.134 10:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:28.134 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:28.134 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:58:28.134 10:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:58:28.134 
10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:28.134 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:28.394 00:58:28.653 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:28.653 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:28.653 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:28.653 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:28.653 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:28.653 10:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:28.653 10:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:28.653 10:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:28.653 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:28.653 { 00:58:28.653 "auth": { 00:58:28.653 "dhgroup": "ffdhe4096", 00:58:28.653 "digest": "sha384", 00:58:28.653 "state": "completed" 00:58:28.653 }, 00:58:28.653 "cntlid": 75, 00:58:28.653 "listen_address": { 00:58:28.653 "adrfam": "IPv4", 00:58:28.653 "traddr": "10.0.0.2", 00:58:28.653 "trsvcid": "4420", 00:58:28.653 "trtype": "TCP" 00:58:28.653 }, 00:58:28.653 "peer_address": { 00:58:28.653 "adrfam": "IPv4", 00:58:28.653 "traddr": "10.0.0.1", 00:58:28.653 "trsvcid": "41680", 00:58:28.653 "trtype": "TCP" 00:58:28.653 }, 00:58:28.653 "qid": 0, 00:58:28.653 "state": "enabled", 00:58:28.653 "thread": "nvmf_tgt_poll_group_000" 00:58:28.653 } 00:58:28.653 ]' 00:58:28.653 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:28.912 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:28.912 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:28.912 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:58:28.912 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:28.912 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:58:28.912 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:28.912 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:29.171 10:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:58:29.738 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:29.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:29.738 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:29.738 10:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:29.738 10:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:29.738 10:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:29.738 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:29.738 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:58:29.738 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:58:29.738 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:58:29.738 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:29.995 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:29.995 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:58:29.995 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:58:29.995 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:29.995 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:29.995 10:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:29.995 10:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:29.995 10:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:29.996 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:29.996 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:30.260 00:58:30.260 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:30.260 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:30.260 10:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:30.260 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:30.260 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:30.260 10:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:30.260 10:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:30.518 10:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:30.518 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:30.518 { 00:58:30.518 "auth": { 00:58:30.518 "dhgroup": "ffdhe4096", 00:58:30.518 "digest": "sha384", 00:58:30.518 "state": "completed" 00:58:30.518 }, 00:58:30.518 "cntlid": 77, 00:58:30.518 "listen_address": { 00:58:30.518 "adrfam": "IPv4", 00:58:30.518 "traddr": "10.0.0.2", 00:58:30.518 "trsvcid": "4420", 00:58:30.518 "trtype": "TCP" 00:58:30.518 }, 00:58:30.518 "peer_address": { 00:58:30.518 "adrfam": "IPv4", 00:58:30.518 "traddr": "10.0.0.1", 00:58:30.518 "trsvcid": "41704", 00:58:30.518 "trtype": "TCP" 00:58:30.518 }, 00:58:30.518 "qid": 0, 00:58:30.518 "state": "enabled", 00:58:30.518 "thread": "nvmf_tgt_poll_group_000" 00:58:30.518 } 00:58:30.518 ]' 00:58:30.518 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:30.518 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:30.518 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:30.518 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:58:30.518 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:30.518 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:30.518 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:30.518 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:30.776 10:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:58:31.342 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:31.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:31.342 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:31.342 10:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:31.342 10:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:31.342 10:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:31.342 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:31.342 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:58:31.342 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:58:31.343 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:58:31.343 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:31.343 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:31.343 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:58:31.343 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:58:31.343 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:31.343 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:58:31.343 10:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:31.343 10:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:31.600 10:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:31.600 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:31.601 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:31.858 00:58:31.858 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:31.858 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:31.858 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:31.858 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:31.859 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:31.859 10:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:31.859 10:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:32.116 10:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:32.116 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:58:32.116 { 00:58:32.116 "auth": { 00:58:32.116 "dhgroup": "ffdhe4096", 00:58:32.116 "digest": "sha384", 00:58:32.116 "state": "completed" 00:58:32.116 }, 00:58:32.116 "cntlid": 79, 00:58:32.116 "listen_address": { 00:58:32.116 "adrfam": "IPv4", 00:58:32.116 "traddr": "10.0.0.2", 00:58:32.116 "trsvcid": "4420", 00:58:32.116 "trtype": "TCP" 00:58:32.116 }, 00:58:32.116 "peer_address": { 00:58:32.116 "adrfam": "IPv4", 00:58:32.116 "traddr": "10.0.0.1", 00:58:32.116 "trsvcid": "41724", 00:58:32.116 "trtype": "TCP" 00:58:32.116 }, 00:58:32.116 "qid": 0, 00:58:32.116 "state": "enabled", 00:58:32.116 "thread": "nvmf_tgt_poll_group_000" 00:58:32.116 } 00:58:32.116 ]' 00:58:32.116 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:32.116 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:32.116 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:32.116 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:58:32.116 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:32.116 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:32.116 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:32.116 10:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:32.373 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:58:32.940 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:32.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:32.940 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:32.940 10:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:32.940 10:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:32.940 10:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:32.940 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:58:32.940 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:32.940 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:58:32.940 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:33.197 10:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:33.453 00:58:33.453 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:33.453 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:33.453 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:33.712 { 00:58:33.712 "auth": { 00:58:33.712 "dhgroup": "ffdhe6144", 00:58:33.712 "digest": "sha384", 00:58:33.712 "state": "completed" 00:58:33.712 }, 00:58:33.712 "cntlid": 81, 00:58:33.712 "listen_address": { 00:58:33.712 "adrfam": "IPv4", 00:58:33.712 "traddr": "10.0.0.2", 00:58:33.712 "trsvcid": "4420", 00:58:33.712 "trtype": "TCP" 00:58:33.712 }, 00:58:33.712 "peer_address": { 00:58:33.712 "adrfam": "IPv4", 00:58:33.712 "traddr": "10.0.0.1", 00:58:33.712 "trsvcid": "41750", 00:58:33.712 "trtype": "TCP" 00:58:33.712 }, 00:58:33.712 "qid": 0, 00:58:33.712 "state": "enabled", 00:58:33.712 "thread": "nvmf_tgt_poll_group_000" 00:58:33.712 } 00:58:33.712 ]' 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:33.712 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:33.969 10:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:58:34.534 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:34.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:34.534 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:34.534 10:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:34.534 10:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:34.534 10:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:34.534 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:34.534 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:58:34.534 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:34.792 10:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:35.050 00:58:35.308 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:35.308 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:35.308 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:35.308 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:35.308 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:35.308 10:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:35.308 10:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:35.308 10:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:35.308 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:35.308 { 00:58:35.308 "auth": { 00:58:35.308 "dhgroup": "ffdhe6144", 00:58:35.308 "digest": "sha384", 00:58:35.308 "state": "completed" 00:58:35.308 }, 00:58:35.308 "cntlid": 83, 00:58:35.308 "listen_address": { 00:58:35.308 "adrfam": "IPv4", 00:58:35.308 "traddr": "10.0.0.2", 00:58:35.308 "trsvcid": "4420", 00:58:35.308 "trtype": "TCP" 00:58:35.308 }, 00:58:35.308 "peer_address": { 00:58:35.308 "adrfam": "IPv4", 00:58:35.308 "traddr": "10.0.0.1", 00:58:35.308 "trsvcid": "49968", 00:58:35.308 "trtype": "TCP" 00:58:35.308 }, 00:58:35.308 "qid": 0, 00:58:35.308 "state": "enabled", 00:58:35.308 "thread": "nvmf_tgt_poll_group_000" 00:58:35.308 } 00:58:35.308 ]' 00:58:35.308 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:35.567 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:35.567 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:35.567 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:58:35.567 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:35.567 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:35.567 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:35.567 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:35.826 10:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:58:36.392 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:58:36.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:36.392 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:36.392 10:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:36.392 10:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:36.392 10:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:36.392 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:36.392 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:36.393 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:36.981 00:58:36.981 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:36.981 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:36.981 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:36.981 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:36.981 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:36.981 10:53:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:58:36.981 10:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:36.981 10:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:36.981 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:36.981 { 00:58:36.981 "auth": { 00:58:36.981 "dhgroup": "ffdhe6144", 00:58:36.981 "digest": "sha384", 00:58:36.981 "state": "completed" 00:58:36.981 }, 00:58:36.981 "cntlid": 85, 00:58:36.981 "listen_address": { 00:58:36.981 "adrfam": "IPv4", 00:58:36.981 "traddr": "10.0.0.2", 00:58:36.981 "trsvcid": "4420", 00:58:36.981 "trtype": "TCP" 00:58:36.981 }, 00:58:36.981 "peer_address": { 00:58:36.981 "adrfam": "IPv4", 00:58:36.981 "traddr": "10.0.0.1", 00:58:36.981 "trsvcid": "50000", 00:58:36.981 "trtype": "TCP" 00:58:36.981 }, 00:58:36.981 "qid": 0, 00:58:36.982 "state": "enabled", 00:58:36.982 "thread": "nvmf_tgt_poll_group_000" 00:58:36.982 } 00:58:36.982 ]' 00:58:36.982 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:37.240 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:37.240 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:37.240 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:58:37.240 10:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:37.240 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:37.240 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:37.240 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:37.498 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:38.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:58:38.066 10:53:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:38.066 10:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:38.632 00:58:38.632 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:38.632 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:38.632 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:38.891 { 00:58:38.891 "auth": { 00:58:38.891 "dhgroup": "ffdhe6144", 00:58:38.891 "digest": "sha384", 00:58:38.891 "state": "completed" 00:58:38.891 }, 00:58:38.891 "cntlid": 87, 00:58:38.891 "listen_address": { 00:58:38.891 "adrfam": "IPv4", 00:58:38.891 "traddr": "10.0.0.2", 00:58:38.891 "trsvcid": "4420", 00:58:38.891 "trtype": "TCP" 00:58:38.891 }, 00:58:38.891 "peer_address": { 00:58:38.891 "adrfam": "IPv4", 00:58:38.891 "traddr": "10.0.0.1", 00:58:38.891 "trsvcid": "50020", 00:58:38.891 "trtype": "TCP" 00:58:38.891 }, 00:58:38.891 "qid": 0, 00:58:38.891 "state": "enabled", 00:58:38.891 "thread": "nvmf_tgt_poll_group_000" 00:58:38.891 } 00:58:38.891 ]' 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:38.891 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:39.148 10:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:58:39.714 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:39.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:39.714 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:39.714 10:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:39.714 10:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:39.714 10:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:39.714 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:58:39.714 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:39.714 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:58:39.714 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:39.973 10:53:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:39.973 10:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:40.540 00:58:40.540 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:40.540 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:40.540 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:40.540 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:40.540 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:40.540 10:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:40.540 10:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:40.799 10:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:40.799 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:40.799 { 00:58:40.799 "auth": { 00:58:40.799 "dhgroup": "ffdhe8192", 00:58:40.799 "digest": "sha384", 00:58:40.799 "state": "completed" 00:58:40.799 }, 00:58:40.799 "cntlid": 89, 00:58:40.799 "listen_address": { 00:58:40.799 "adrfam": "IPv4", 00:58:40.799 "traddr": "10.0.0.2", 00:58:40.799 "trsvcid": "4420", 00:58:40.799 "trtype": "TCP" 00:58:40.799 }, 00:58:40.799 "peer_address": { 00:58:40.799 "adrfam": "IPv4", 00:58:40.799 "traddr": "10.0.0.1", 00:58:40.799 "trsvcid": "50044", 00:58:40.799 "trtype": "TCP" 00:58:40.799 }, 00:58:40.799 "qid": 0, 00:58:40.799 "state": "enabled", 00:58:40.799 "thread": "nvmf_tgt_poll_group_000" 00:58:40.799 } 00:58:40.799 ]' 00:58:40.799 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:40.799 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:40.799 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:40.799 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:58:40.799 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:40.799 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:40.799 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:40.799 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:41.058 10:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret 
DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:41.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:41.667 10:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:42.234 00:58:42.234 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:42.234 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:42.234 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
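After each attach, lines 44 through 49 of target/auth.sh verify the negotiated parameters and tear the controller down, and lines 52 through 56 repeat the handshake with the kernel nvme-cli initiator before removing the host from the subsystem. A self-contained sketch of that verification and teardown follows, under the same assumptions as the earlier sketch; $dhchap_key and $dhchap_ctrl_key stand in for the DHHC-1:... secret strings visible in the log and are placeholders, not values defined here.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7
    digest=sha384 dhgroup=ffdhe8192    # illustrative values for one iteration

    # The controller must have come up on the host-side stack under the name nvme0.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # The accepted qpair on the target must report the negotiated auth parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down the SPDK-host controller, then redo the handshake with nvme-cli
    # and clean up the subsystem host entry for the next iteration.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn#*uuid:}" \
        --dhchap-secret "$dhchap_key" --dhchap-ctrl-secret "$dhchap_ctrl_key"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The escaped patterns in the trace, such as [[ sha384 == \s\h\a\3\8\4 ]], are simply how bash xtrace prints the quoted right-hand side of a [[ ]] comparison; they are equivalent to the plain string comparisons in the sketch.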
00:58:42.492 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:42.492 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:42.492 10:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:42.492 10:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:42.492 10:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:42.492 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:42.492 { 00:58:42.492 "auth": { 00:58:42.492 "dhgroup": "ffdhe8192", 00:58:42.492 "digest": "sha384", 00:58:42.492 "state": "completed" 00:58:42.492 }, 00:58:42.492 "cntlid": 91, 00:58:42.492 "listen_address": { 00:58:42.492 "adrfam": "IPv4", 00:58:42.492 "traddr": "10.0.0.2", 00:58:42.492 "trsvcid": "4420", 00:58:42.493 "trtype": "TCP" 00:58:42.493 }, 00:58:42.493 "peer_address": { 00:58:42.493 "adrfam": "IPv4", 00:58:42.493 "traddr": "10.0.0.1", 00:58:42.493 "trsvcid": "50074", 00:58:42.493 "trtype": "TCP" 00:58:42.493 }, 00:58:42.493 "qid": 0, 00:58:42.493 "state": "enabled", 00:58:42.493 "thread": "nvmf_tgt_poll_group_000" 00:58:42.493 } 00:58:42.493 ]' 00:58:42.493 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:42.493 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:42.493 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:42.493 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:58:42.493 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:42.493 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:42.493 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:42.493 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:42.752 10:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:58:43.321 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:43.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:43.321 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:43.321 10:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:43.321 10:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:43.321 10:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:43.321 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:43.321 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:58:43.321 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:43.579 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:44.145 00:58:44.145 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:44.145 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:44.145 10:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:44.404 { 00:58:44.404 "auth": { 00:58:44.404 "dhgroup": "ffdhe8192", 00:58:44.404 "digest": "sha384", 00:58:44.404 "state": "completed" 00:58:44.404 }, 00:58:44.404 "cntlid": 93, 00:58:44.404 "listen_address": { 00:58:44.404 "adrfam": "IPv4", 00:58:44.404 "traddr": "10.0.0.2", 00:58:44.404 "trsvcid": "4420", 00:58:44.404 "trtype": "TCP" 00:58:44.404 }, 00:58:44.404 "peer_address": { 00:58:44.404 "adrfam": "IPv4", 00:58:44.404 "traddr": "10.0.0.1", 00:58:44.404 "trsvcid": "50096", 00:58:44.404 
"trtype": "TCP" 00:58:44.404 }, 00:58:44.404 "qid": 0, 00:58:44.404 "state": "enabled", 00:58:44.404 "thread": "nvmf_tgt_poll_group_000" 00:58:44.404 } 00:58:44.404 ]' 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:44.404 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:44.663 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:58:45.230 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:45.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:45.230 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:45.230 10:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:45.230 10:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:45.230 10:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:45.230 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:45.230 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:58:45.230 10:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:58:45.230 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:58:45.230 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:45.230 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:58:45.230 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:58:45.230 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:58:45.230 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:45.230 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:58:45.230 10:53:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:45.489 10:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:45.489 10:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:45.489 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:45.489 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:45.756 00:58:45.756 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:45.756 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:45.756 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:46.015 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:46.015 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:46.015 10:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:46.015 10:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:46.015 10:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:46.015 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:46.015 { 00:58:46.015 "auth": { 00:58:46.015 "dhgroup": "ffdhe8192", 00:58:46.015 "digest": "sha384", 00:58:46.015 "state": "completed" 00:58:46.015 }, 00:58:46.015 "cntlid": 95, 00:58:46.015 "listen_address": { 00:58:46.015 "adrfam": "IPv4", 00:58:46.015 "traddr": "10.0.0.2", 00:58:46.015 "trsvcid": "4420", 00:58:46.015 "trtype": "TCP" 00:58:46.015 }, 00:58:46.015 "peer_address": { 00:58:46.015 "adrfam": "IPv4", 00:58:46.015 "traddr": "10.0.0.1", 00:58:46.015 "trsvcid": "48354", 00:58:46.015 "trtype": "TCP" 00:58:46.015 }, 00:58:46.015 "qid": 0, 00:58:46.015 "state": "enabled", 00:58:46.015 "thread": "nvmf_tgt_poll_group_000" 00:58:46.015 } 00:58:46.015 ]' 00:58:46.015 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:46.015 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:58:46.273 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:46.273 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:58:46.273 10:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:46.273 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:46.273 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:46.273 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:46.531 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:47.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:47.097 10:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:47.355 00:58:47.355 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:47.355 
10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:47.355 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:47.614 { 00:58:47.614 "auth": { 00:58:47.614 "dhgroup": "null", 00:58:47.614 "digest": "sha512", 00:58:47.614 "state": "completed" 00:58:47.614 }, 00:58:47.614 "cntlid": 97, 00:58:47.614 "listen_address": { 00:58:47.614 "adrfam": "IPv4", 00:58:47.614 "traddr": "10.0.0.2", 00:58:47.614 "trsvcid": "4420", 00:58:47.614 "trtype": "TCP" 00:58:47.614 }, 00:58:47.614 "peer_address": { 00:58:47.614 "adrfam": "IPv4", 00:58:47.614 "traddr": "10.0.0.1", 00:58:47.614 "trsvcid": "48376", 00:58:47.614 "trtype": "TCP" 00:58:47.614 }, 00:58:47.614 "qid": 0, 00:58:47.614 "state": "enabled", 00:58:47.614 "thread": "nvmf_tgt_poll_group_000" 00:58:47.614 } 00:58:47.614 ]' 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:47.614 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:47.872 10:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:58:48.439 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:48.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:48.439 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:48.439 10:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:48.439 10:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:48.439 10:53:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:48.439 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:48.439 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:58:48.439 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:48.698 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:48.956 00:58:48.956 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:48.956 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:48.956 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:49.215 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:49.215 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:49.215 10:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:49.215 10:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:49.215 10:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:49.215 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:49.215 { 00:58:49.215 "auth": { 00:58:49.215 "dhgroup": "null", 00:58:49.215 "digest": "sha512", 00:58:49.215 "state": "completed" 00:58:49.215 }, 00:58:49.215 "cntlid": 99, 00:58:49.215 "listen_address": { 00:58:49.215 
"adrfam": "IPv4", 00:58:49.215 "traddr": "10.0.0.2", 00:58:49.215 "trsvcid": "4420", 00:58:49.215 "trtype": "TCP" 00:58:49.215 }, 00:58:49.215 "peer_address": { 00:58:49.215 "adrfam": "IPv4", 00:58:49.215 "traddr": "10.0.0.1", 00:58:49.215 "trsvcid": "48402", 00:58:49.215 "trtype": "TCP" 00:58:49.215 }, 00:58:49.215 "qid": 0, 00:58:49.215 "state": "enabled", 00:58:49.215 "thread": "nvmf_tgt_poll_group_000" 00:58:49.215 } 00:58:49.215 ]' 00:58:49.215 10:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:49.215 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:58:49.215 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:49.215 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:58:49.215 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:49.215 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:49.215 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:49.215 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:49.473 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:58:50.040 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:50.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:50.041 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:50.041 10:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:50.041 10:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:50.041 10:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:50.041 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:50.041 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:58:50.041 10:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:50.298 10:53:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:50.298 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:50.557 00:58:50.557 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:50.557 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:50.557 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:50.557 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:50.557 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:50.557 10:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:50.557 10:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:50.815 10:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:50.815 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:50.815 { 00:58:50.815 "auth": { 00:58:50.815 "dhgroup": "null", 00:58:50.815 "digest": "sha512", 00:58:50.815 "state": "completed" 00:58:50.815 }, 00:58:50.815 "cntlid": 101, 00:58:50.815 "listen_address": { 00:58:50.815 "adrfam": "IPv4", 00:58:50.815 "traddr": "10.0.0.2", 00:58:50.815 "trsvcid": "4420", 00:58:50.815 "trtype": "TCP" 00:58:50.815 }, 00:58:50.815 "peer_address": { 00:58:50.815 "adrfam": "IPv4", 00:58:50.815 "traddr": "10.0.0.1", 00:58:50.815 "trsvcid": "48426", 00:58:50.815 "trtype": "TCP" 00:58:50.815 }, 00:58:50.815 "qid": 0, 00:58:50.815 "state": "enabled", 00:58:50.815 "thread": "nvmf_tgt_poll_group_000" 00:58:50.815 } 00:58:50.815 ]' 00:58:50.815 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:50.815 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:58:50.815 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:50.815 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:58:50.815 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:50.815 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:50.815 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
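The verification step the trace performs after each attach condenses to the short shell sketch below; the controller name, subsystem NQN, socket path, and expected values (sha512/null at this point in the run) are taken from this log, and the target-side rpc_cmd is assumed to resolve to scripts/rpc.py against the target's default RPC socket.
# Host side: confirm the controller attached under the expected name
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
# Target side: fetch the subsystem's queue pairs and check the negotiated auth parameters
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
# Tear down before the next digest/dhgroup/key combination
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0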
00:58:50.815 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:51.073 10:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:51.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:51.638 10:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:51.901 10:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:51.901 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:51.902 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:51.902 00:58:52.159 10:53:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:52.159 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:52.159 10:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:52.159 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:52.159 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:52.159 10:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:52.159 10:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:52.159 10:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:52.159 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:52.159 { 00:58:52.159 "auth": { 00:58:52.159 "dhgroup": "null", 00:58:52.159 "digest": "sha512", 00:58:52.159 "state": "completed" 00:58:52.159 }, 00:58:52.159 "cntlid": 103, 00:58:52.159 "listen_address": { 00:58:52.159 "adrfam": "IPv4", 00:58:52.159 "traddr": "10.0.0.2", 00:58:52.159 "trsvcid": "4420", 00:58:52.159 "trtype": "TCP" 00:58:52.159 }, 00:58:52.159 "peer_address": { 00:58:52.159 "adrfam": "IPv4", 00:58:52.159 "traddr": "10.0.0.1", 00:58:52.159 "trsvcid": "48450", 00:58:52.159 "trtype": "TCP" 00:58:52.159 }, 00:58:52.159 "qid": 0, 00:58:52.159 "state": "enabled", 00:58:52.159 "thread": "nvmf_tgt_poll_group_000" 00:58:52.159 } 00:58:52.159 ]' 00:58:52.159 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:52.417 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:58:52.417 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:52.417 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:58:52.417 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:52.417 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:52.417 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:52.417 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:52.675 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:58:53.240 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:53.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:53.240 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:53.240 10:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:53.240 10:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:53.240 10:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:58:53.240 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:58:53.240 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:53.240 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:58:53.240 10:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:53.240 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:53.498 00:58:53.498 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:53.498 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:53.498 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:53.756 { 00:58:53.756 "auth": { 00:58:53.756 "dhgroup": "ffdhe2048", 00:58:53.756 "digest": "sha512", 00:58:53.756 "state": "completed" 00:58:53.756 }, 00:58:53.756 
"cntlid": 105, 00:58:53.756 "listen_address": { 00:58:53.756 "adrfam": "IPv4", 00:58:53.756 "traddr": "10.0.0.2", 00:58:53.756 "trsvcid": "4420", 00:58:53.756 "trtype": "TCP" 00:58:53.756 }, 00:58:53.756 "peer_address": { 00:58:53.756 "adrfam": "IPv4", 00:58:53.756 "traddr": "10.0.0.1", 00:58:53.756 "trsvcid": "48462", 00:58:53.756 "trtype": "TCP" 00:58:53.756 }, 00:58:53.756 "qid": 0, 00:58:53.756 "state": "enabled", 00:58:53.756 "thread": "nvmf_tgt_poll_group_000" 00:58:53.756 } 00:58:53.756 ]' 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:53.756 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:54.014 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:54.014 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:54.014 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:54.014 10:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:58:54.581 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:54.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:54.581 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:54.581 10:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:54.581 10:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:54.581 10:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:54.581 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:54.581 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:58:54.581 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 
00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:54.840 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:55.099 00:58:55.099 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:55.099 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:55.099 10:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:55.358 { 00:58:55.358 "auth": { 00:58:55.358 "dhgroup": "ffdhe2048", 00:58:55.358 "digest": "sha512", 00:58:55.358 "state": "completed" 00:58:55.358 }, 00:58:55.358 "cntlid": 107, 00:58:55.358 "listen_address": { 00:58:55.358 "adrfam": "IPv4", 00:58:55.358 "traddr": "10.0.0.2", 00:58:55.358 "trsvcid": "4420", 00:58:55.358 "trtype": "TCP" 00:58:55.358 }, 00:58:55.358 "peer_address": { 00:58:55.358 "adrfam": "IPv4", 00:58:55.358 "traddr": "10.0.0.1", 00:58:55.358 "trsvcid": "51906", 00:58:55.358 "trtype": "TCP" 00:58:55.358 }, 00:58:55.358 "qid": 0, 00:58:55.358 "state": "enabled", 00:58:55.358 "thread": "nvmf_tgt_poll_group_000" 00:58:55.358 } 00:58:55.358 ]' 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:55.358 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:55.616 10:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:58:56.182 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:56.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:56.182 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:56.182 10:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:56.182 10:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:56.182 10:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:56.182 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:56.182 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:58:56.182 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:56.441 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:56.712 00:58:56.712 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:56.712 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:56.712 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:56.970 { 00:58:56.970 "auth": { 00:58:56.970 "dhgroup": "ffdhe2048", 00:58:56.970 "digest": "sha512", 00:58:56.970 "state": "completed" 00:58:56.970 }, 00:58:56.970 "cntlid": 109, 00:58:56.970 "listen_address": { 00:58:56.970 "adrfam": "IPv4", 00:58:56.970 "traddr": "10.0.0.2", 00:58:56.970 "trsvcid": "4420", 00:58:56.970 "trtype": "TCP" 00:58:56.970 }, 00:58:56.970 "peer_address": { 00:58:56.970 "adrfam": "IPv4", 00:58:56.970 "traddr": "10.0.0.1", 00:58:56.970 "trsvcid": "51928", 00:58:56.970 "trtype": "TCP" 00:58:56.970 }, 00:58:56.970 "qid": 0, 00:58:56.970 "state": "enabled", 00:58:56.970 "thread": "nvmf_tgt_poll_group_000" 00:58:56.970 } 00:58:56.970 ]' 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:56.970 10:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:57.226 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:58:57.791 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:57.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:57.791 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:57.791 10:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:57.791 10:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:57.791 10:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:57.791 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:57.791 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:58:57.791 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:58.048 10:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:58:58.306 00:58:58.306 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:58.306 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:58.306 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:58:58.564 { 00:58:58.564 "auth": { 00:58:58.564 "dhgroup": "ffdhe2048", 00:58:58.564 "digest": "sha512", 00:58:58.564 "state": "completed" 00:58:58.564 }, 00:58:58.564 "cntlid": 111, 00:58:58.564 "listen_address": { 00:58:58.564 "adrfam": "IPv4", 00:58:58.564 "traddr": "10.0.0.2", 00:58:58.564 "trsvcid": "4420", 00:58:58.564 "trtype": "TCP" 00:58:58.564 }, 00:58:58.564 "peer_address": { 00:58:58.564 "adrfam": "IPv4", 00:58:58.564 "traddr": "10.0.0.1", 00:58:58.564 "trsvcid": "51942", 00:58:58.564 "trtype": "TCP" 00:58:58.564 }, 00:58:58.564 "qid": 0, 00:58:58.564 "state": "enabled", 00:58:58.564 "thread": "nvmf_tgt_poll_group_000" 00:58:58.564 } 00:58:58.564 ]' 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:58.564 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:58.824 10:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:59.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:59.480 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:59.738 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:59.996 00:58:59.996 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:58:59.996 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:59.996 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:58:59.996 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:59.996 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:59.996 10:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:58:59.996 10:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:59.996 10:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:58:59.996 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:58:59.996 { 00:58:59.996 "auth": { 00:58:59.996 "dhgroup": "ffdhe3072", 00:58:59.996 "digest": "sha512", 00:58:59.996 "state": "completed" 00:58:59.996 }, 00:58:59.996 "cntlid": 113, 00:58:59.996 "listen_address": { 00:58:59.996 "adrfam": "IPv4", 00:58:59.996 "traddr": "10.0.0.2", 00:58:59.996 "trsvcid": "4420", 00:58:59.996 "trtype": "TCP" 00:58:59.996 }, 00:58:59.996 "peer_address": { 00:58:59.996 "adrfam": "IPv4", 00:58:59.996 "traddr": "10.0.0.1", 00:58:59.996 "trsvcid": "51974", 00:58:59.996 "trtype": "TCP" 00:58:59.996 }, 00:58:59.996 "qid": 0, 00:58:59.996 "state": "enabled", 00:58:59.996 "thread": "nvmf_tgt_poll_group_000" 00:58:59.996 } 00:58:59.996 ]' 00:58:59.996 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:00.254 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:00.254 10:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:00.254 10:54:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:00.254 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:00.254 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:00.254 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:00.255 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:00.512 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:01.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:01.079 10:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:01.079 10:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:01.079 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:01.079 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:01.337 00:59:01.595 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:01.595 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:01.595 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:01.595 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:01.595 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:01.595 10:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:01.595 10:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:01.595 10:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:01.595 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:01.595 { 00:59:01.595 "auth": { 00:59:01.595 "dhgroup": "ffdhe3072", 00:59:01.595 "digest": "sha512", 00:59:01.595 "state": "completed" 00:59:01.595 }, 00:59:01.595 "cntlid": 115, 00:59:01.595 "listen_address": { 00:59:01.595 "adrfam": "IPv4", 00:59:01.595 "traddr": "10.0.0.2", 00:59:01.595 "trsvcid": "4420", 00:59:01.595 "trtype": "TCP" 00:59:01.595 }, 00:59:01.595 "peer_address": { 00:59:01.595 "adrfam": "IPv4", 00:59:01.595 "traddr": "10.0.0.1", 00:59:01.595 "trsvcid": "51994", 00:59:01.595 "trtype": "TCP" 00:59:01.595 }, 00:59:01.595 "qid": 0, 00:59:01.595 "state": "enabled", 00:59:01.595 "thread": "nvmf_tgt_poll_group_000" 00:59:01.595 } 00:59:01.595 ]' 00:59:01.595 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:01.870 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:01.871 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:01.871 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:01.871 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:01.871 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:01.871 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:01.871 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:02.128 10:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:59:02.694 10:54:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:02.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:02.694 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:02.952 00:59:02.952 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:02.952 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:02.952 10:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:03.210 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:03.210 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:59:03.210 10:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:03.210 10:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:03.210 10:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:03.210 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:03.210 { 00:59:03.210 "auth": { 00:59:03.210 "dhgroup": "ffdhe3072", 00:59:03.210 "digest": "sha512", 00:59:03.210 "state": "completed" 00:59:03.210 }, 00:59:03.210 "cntlid": 117, 00:59:03.210 "listen_address": { 00:59:03.210 "adrfam": "IPv4", 00:59:03.210 "traddr": "10.0.0.2", 00:59:03.210 "trsvcid": "4420", 00:59:03.210 "trtype": "TCP" 00:59:03.210 }, 00:59:03.210 "peer_address": { 00:59:03.210 "adrfam": "IPv4", 00:59:03.210 "traddr": "10.0.0.1", 00:59:03.210 "trsvcid": "52024", 00:59:03.210 "trtype": "TCP" 00:59:03.210 }, 00:59:03.210 "qid": 0, 00:59:03.210 "state": "enabled", 00:59:03.210 "thread": "nvmf_tgt_poll_group_000" 00:59:03.210 } 00:59:03.210 ]' 00:59:03.210 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:03.210 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:03.210 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:03.469 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:03.469 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:03.469 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:03.469 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:03.469 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:03.727 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:59:04.308 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:04.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:04.308 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:04.308 10:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:04.308 10:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:04.308 10:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:04.308 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:04.308 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:59:04.308 10:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:04.308 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:04.566 00:59:04.566 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:04.566 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:04.566 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:04.825 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:04.825 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:04.825 10:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:04.825 10:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:04.825 10:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:04.825 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:04.825 { 00:59:04.825 "auth": { 00:59:04.825 "dhgroup": "ffdhe3072", 00:59:04.825 "digest": "sha512", 00:59:04.825 "state": "completed" 00:59:04.825 }, 00:59:04.825 "cntlid": 119, 00:59:04.825 "listen_address": { 00:59:04.825 "adrfam": "IPv4", 00:59:04.825 "traddr": "10.0.0.2", 00:59:04.825 "trsvcid": "4420", 00:59:04.825 "trtype": "TCP" 00:59:04.825 }, 00:59:04.825 "peer_address": { 00:59:04.825 "adrfam": "IPv4", 00:59:04.825 "traddr": "10.0.0.1", 00:59:04.825 "trsvcid": "34788", 00:59:04.825 "trtype": "TCP" 00:59:04.825 }, 00:59:04.825 "qid": 0, 00:59:04.825 "state": "enabled", 00:59:04.825 "thread": "nvmf_tgt_poll_group_000" 00:59:04.825 } 00:59:04.825 ]' 00:59:04.825 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:04.825 
10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:04.825 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:05.083 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:05.083 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:05.083 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:05.083 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:05.083 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:05.083 10:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:59:05.649 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:05.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:05.649 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:05.649 10:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:05.649 10:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:05.649 10:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:05.649 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:59:05.649 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:05.649 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:59:05.649 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:05.907 10:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:06.165 00:59:06.165 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:06.165 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:06.165 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:06.423 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:06.423 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:06.423 10:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:06.423 10:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:06.423 10:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:06.423 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:06.423 { 00:59:06.423 "auth": { 00:59:06.423 "dhgroup": "ffdhe4096", 00:59:06.423 "digest": "sha512", 00:59:06.423 "state": "completed" 00:59:06.423 }, 00:59:06.423 "cntlid": 121, 00:59:06.423 "listen_address": { 00:59:06.423 "adrfam": "IPv4", 00:59:06.423 "traddr": "10.0.0.2", 00:59:06.423 "trsvcid": "4420", 00:59:06.423 "trtype": "TCP" 00:59:06.423 }, 00:59:06.423 "peer_address": { 00:59:06.423 "adrfam": "IPv4", 00:59:06.423 "traddr": "10.0.0.1", 00:59:06.423 "trsvcid": "34818", 00:59:06.423 "trtype": "TCP" 00:59:06.423 }, 00:59:06.423 "qid": 0, 00:59:06.423 "state": "enabled", 00:59:06.423 "thread": "nvmf_tgt_poll_group_000" 00:59:06.423 } 00:59:06.423 ]' 00:59:06.423 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:06.423 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:06.423 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:06.681 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:59:06.681 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:06.681 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:06.681 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:06.681 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:06.681 10:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret 
DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:59:07.248 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:07.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:07.248 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:07.248 10:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:07.248 10:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:07.248 10:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:07.249 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:07.249 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:59:07.249 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:07.507 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:07.508 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:07.766 00:59:08.024 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:08.024 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:08.024 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
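The passes above (and the ones that follow) all run the same DH-HMAC-CHAP cycle: the host-side RPC server is pinned to a single digest/dhgroup pair via bdev_nvme_set_options, the target re-adds the host NQN with a DH-CHAP key (and, for keys 0-2, a controller key for bidirectional authentication) via nvmf_subsystem_add_host, bdev_nvme_attach_controller then has to complete authentication before the qpair appears, and the controller is detached again. A minimal sketch of one such iteration, assuming the same RPC socket paths used in this run and that the key names key1/ckey1 were registered with the target earlier in the test (that setup is outside this excerpt):

HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
TGTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # target app on its default socket (assumption)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7

# Restrict the host to a single digest/dhgroup combination for this pass.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
# Allow the host on the target and bind its DH-CHAP key plus the controller (bidirectional) key.
$TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Attaching the controller only succeeds once DH-HMAC-CHAP completes on the admin qpair.
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# ...verify the negotiated parameters (see the qpair check further down), then tear down.
$HOSTRPC bdev_nvme_detach_controller nvme0
$TGTRPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"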
00:59:08.024 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:08.024 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:08.024 10:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:08.024 10:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:08.024 10:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:08.024 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:08.024 { 00:59:08.024 "auth": { 00:59:08.024 "dhgroup": "ffdhe4096", 00:59:08.024 "digest": "sha512", 00:59:08.024 "state": "completed" 00:59:08.024 }, 00:59:08.024 "cntlid": 123, 00:59:08.024 "listen_address": { 00:59:08.024 "adrfam": "IPv4", 00:59:08.024 "traddr": "10.0.0.2", 00:59:08.024 "trsvcid": "4420", 00:59:08.024 "trtype": "TCP" 00:59:08.024 }, 00:59:08.024 "peer_address": { 00:59:08.024 "adrfam": "IPv4", 00:59:08.024 "traddr": "10.0.0.1", 00:59:08.024 "trsvcid": "34850", 00:59:08.024 "trtype": "TCP" 00:59:08.024 }, 00:59:08.024 "qid": 0, 00:59:08.024 "state": "enabled", 00:59:08.024 "thread": "nvmf_tgt_poll_group_000" 00:59:08.024 } 00:59:08.024 ]' 00:59:08.024 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:08.282 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:08.282 10:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:08.282 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:59:08.282 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:08.282 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:08.282 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:08.282 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:08.540 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:59:09.106 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:09.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:09.106 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:09.106 10:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:09.106 10:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:09.106 10:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:09.106 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:09.106 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:59:09.106 10:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:09.106 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:09.673 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:09.674 { 00:59:09.674 "auth": { 00:59:09.674 "dhgroup": "ffdhe4096", 00:59:09.674 "digest": "sha512", 00:59:09.674 "state": "completed" 00:59:09.674 }, 00:59:09.674 "cntlid": 125, 00:59:09.674 "listen_address": { 00:59:09.674 "adrfam": "IPv4", 00:59:09.674 "traddr": "10.0.0.2", 00:59:09.674 "trsvcid": "4420", 00:59:09.674 "trtype": "TCP" 00:59:09.674 }, 00:59:09.674 "peer_address": { 00:59:09.674 "adrfam": "IPv4", 00:59:09.674 "traddr": "10.0.0.1", 00:59:09.674 "trsvcid": "34894", 00:59:09.674 
"trtype": "TCP" 00:59:09.674 }, 00:59:09.674 "qid": 0, 00:59:09.674 "state": "enabled", 00:59:09.674 "thread": "nvmf_tgt_poll_group_000" 00:59:09.674 } 00:59:09.674 ]' 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:09.674 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:09.933 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:59:09.933 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:09.933 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:09.933 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:09.933 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:10.192 10:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:10.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:59:10.759 10:54:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:10.759 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:11.017 00:59:11.275 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:11.275 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:11.275 10:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:11.275 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:11.275 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:11.275 10:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:11.275 10:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:11.275 10:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:11.275 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:11.275 { 00:59:11.275 "auth": { 00:59:11.275 "dhgroup": "ffdhe4096", 00:59:11.275 "digest": "sha512", 00:59:11.275 "state": "completed" 00:59:11.275 }, 00:59:11.275 "cntlid": 127, 00:59:11.275 "listen_address": { 00:59:11.275 "adrfam": "IPv4", 00:59:11.275 "traddr": "10.0.0.2", 00:59:11.275 "trsvcid": "4420", 00:59:11.275 "trtype": "TCP" 00:59:11.275 }, 00:59:11.275 "peer_address": { 00:59:11.276 "adrfam": "IPv4", 00:59:11.276 "traddr": "10.0.0.1", 00:59:11.276 "trsvcid": "34930", 00:59:11.276 "trtype": "TCP" 00:59:11.276 }, 00:59:11.276 "qid": 0, 00:59:11.276 "state": "enabled", 00:59:11.276 "thread": "nvmf_tgt_poll_group_000" 00:59:11.276 } 00:59:11.276 ]' 00:59:11.276 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:11.534 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:11.534 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:11.534 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:59:11.534 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:11.535 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:11.535 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:11.535 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:11.793 10:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:59:12.369 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:12.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:12.370 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:12.937 00:59:12.937 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:12.937 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:12.937 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:12.937 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:12.937 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:12.937 10:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:12.937 10:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:12.937 10:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:12.937 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:12.937 { 00:59:12.937 "auth": { 00:59:12.937 "dhgroup": "ffdhe6144", 00:59:12.937 "digest": "sha512", 00:59:12.937 "state": "completed" 00:59:12.937 }, 00:59:12.937 "cntlid": 129, 00:59:12.937 "listen_address": { 00:59:12.937 "adrfam": "IPv4", 00:59:12.937 "traddr": "10.0.0.2", 00:59:12.938 "trsvcid": "4420", 00:59:12.938 "trtype": "TCP" 00:59:12.938 }, 00:59:12.938 "peer_address": { 00:59:12.938 "adrfam": "IPv4", 00:59:12.938 "traddr": "10.0.0.1", 00:59:12.938 "trsvcid": "34962", 00:59:12.938 "trtype": "TCP" 00:59:12.938 }, 00:59:12.938 "qid": 0, 00:59:12.938 "state": "enabled", 00:59:12.938 "thread": "nvmf_tgt_poll_group_000" 00:59:12.938 } 00:59:12.938 ]' 00:59:12.938 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:13.196 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:13.196 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:13.196 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:59:13.196 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:13.196 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:13.196 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:13.196 10:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:13.455 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:14.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
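Each attach is then verified from the target side: nvmf_subsystem_get_qpairs returns the established qpair as JSON, and the jq checks in this output confirm that the negotiated digest, DH group and authentication state match the configured values ("state": "completed" means the DH-HMAC-CHAP exchange finished). A condensed sketch of that verification for the ffdhe6144 pass shown here, reusing the same RPC call and jq expressions:

# Ask the target for the subsystem's qpairs and check the negotiated auth parameters.
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
# Any mismatch makes a [[ ]] test fail, which the test framework treats as a test failure (assumption).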
00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:14.025 10:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:14.593 00:59:14.593 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:14.593 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:14.593 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:14.851 { 00:59:14.851 "auth": { 00:59:14.851 "dhgroup": "ffdhe6144", 00:59:14.851 "digest": "sha512", 00:59:14.851 "state": "completed" 00:59:14.851 }, 00:59:14.851 "cntlid": 131, 00:59:14.851 "listen_address": { 00:59:14.851 "adrfam": "IPv4", 00:59:14.851 "traddr": "10.0.0.2", 
00:59:14.851 "trsvcid": "4420", 00:59:14.851 "trtype": "TCP" 00:59:14.851 }, 00:59:14.851 "peer_address": { 00:59:14.851 "adrfam": "IPv4", 00:59:14.851 "traddr": "10.0.0.1", 00:59:14.851 "trsvcid": "34982", 00:59:14.851 "trtype": "TCP" 00:59:14.851 }, 00:59:14.851 "qid": 0, 00:59:14.851 "state": "enabled", 00:59:14.851 "thread": "nvmf_tgt_poll_group_000" 00:59:14.851 } 00:59:14.851 ]' 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:14.851 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:15.109 10:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:59:15.676 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:15.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:15.676 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:15.676 10:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:15.676 10:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:15.676 10:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:15.676 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:15.676 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:59:15.676 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:15.934 10:54:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:15.934 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:16.192 00:59:16.192 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:16.192 10:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:16.192 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:16.451 { 00:59:16.451 "auth": { 00:59:16.451 "dhgroup": "ffdhe6144", 00:59:16.451 "digest": "sha512", 00:59:16.451 "state": "completed" 00:59:16.451 }, 00:59:16.451 "cntlid": 133, 00:59:16.451 "listen_address": { 00:59:16.451 "adrfam": "IPv4", 00:59:16.451 "traddr": "10.0.0.2", 00:59:16.451 "trsvcid": "4420", 00:59:16.451 "trtype": "TCP" 00:59:16.451 }, 00:59:16.451 "peer_address": { 00:59:16.451 "adrfam": "IPv4", 00:59:16.451 "traddr": "10.0.0.1", 00:59:16.451 "trsvcid": "56514", 00:59:16.451 "trtype": "TCP" 00:59:16.451 }, 00:59:16.451 "qid": 0, 00:59:16.451 "state": "enabled", 00:59:16.451 "thread": "nvmf_tgt_poll_group_000" 00:59:16.451 } 00:59:16.451 ]' 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:59:16.451 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:16.710 10:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:59:17.278 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:17.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:17.278 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:17.278 10:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:17.278 10:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:17.278 10:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:17.278 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:17.278 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:59:17.278 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:17.538 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:17.796 
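The entries above replay auth.sh's connect_authenticate helper for the sha512 digest with the ffdhe6144 DH group, once per key index (key2, then key3). A minimal sketch of a single round, assuming the host RPC socket (/var/tmp/host.sock), subsystem NQN, host UUID and key names shown in the log; the DH-HMAC-CHAP keys themselves are loaded earlier in the test and are not reproduced here:

  # host side: restrict DH-HMAC-CHAP negotiation to one digest and one DH group
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # target side: allow the host on the subsystem with a key / controller-key pair
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: attach with the same keys, then inspect the negotiated auth block
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The same round is then repeated through the kernel initiator (nvme connect --dhchap-secret ... --dhchap-ctrl-secret ...) before the host entry is removed again with nvmf_subsystem_remove_host.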
00:59:17.796 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:17.796 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:17.796 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:18.055 { 00:59:18.055 "auth": { 00:59:18.055 "dhgroup": "ffdhe6144", 00:59:18.055 "digest": "sha512", 00:59:18.055 "state": "completed" 00:59:18.055 }, 00:59:18.055 "cntlid": 135, 00:59:18.055 "listen_address": { 00:59:18.055 "adrfam": "IPv4", 00:59:18.055 "traddr": "10.0.0.2", 00:59:18.055 "trsvcid": "4420", 00:59:18.055 "trtype": "TCP" 00:59:18.055 }, 00:59:18.055 "peer_address": { 00:59:18.055 "adrfam": "IPv4", 00:59:18.055 "traddr": "10.0.0.1", 00:59:18.055 "trsvcid": "56528", 00:59:18.055 "trtype": "TCP" 00:59:18.055 }, 00:59:18.055 "qid": 0, 00:59:18.055 "state": "enabled", 00:59:18.055 "thread": "nvmf_tgt_poll_group_000" 00:59:18.055 } 00:59:18.055 ]' 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:59:18.055 10:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:18.312 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:18.312 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:18.312 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:18.313 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:59:18.878 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:18.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:18.878 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:18.878 10:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:18.878 10:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:18.878 10:54:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:18.878 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:59:18.878 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:18.878 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:59:18.878 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:19.137 10:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:19.703 00:59:19.703 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:19.703 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:19.703 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:19.961 { 00:59:19.961 "auth": { 00:59:19.961 "dhgroup": "ffdhe8192", 00:59:19.961 "digest": "sha512", 
00:59:19.961 "state": "completed" 00:59:19.961 }, 00:59:19.961 "cntlid": 137, 00:59:19.961 "listen_address": { 00:59:19.961 "adrfam": "IPv4", 00:59:19.961 "traddr": "10.0.0.2", 00:59:19.961 "trsvcid": "4420", 00:59:19.961 "trtype": "TCP" 00:59:19.961 }, 00:59:19.961 "peer_address": { 00:59:19.961 "adrfam": "IPv4", 00:59:19.961 "traddr": "10.0.0.1", 00:59:19.961 "trsvcid": "56564", 00:59:19.961 "trtype": "TCP" 00:59:19.961 }, 00:59:19.961 "qid": 0, 00:59:19.961 "state": "enabled", 00:59:19.961 "thread": "nvmf_tgt_poll_group_000" 00:59:19.961 } 00:59:19.961 ]' 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:19.961 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:20.218 10:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:59:20.784 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:20.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:20.784 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:20.784 10:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:20.784 10:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:20.784 10:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:20.784 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:20.784 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:59:20.784 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:59:21.043 10:54:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:21.043 10:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:21.300 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:21.558 { 00:59:21.558 "auth": { 00:59:21.558 "dhgroup": "ffdhe8192", 00:59:21.558 "digest": "sha512", 00:59:21.558 "state": "completed" 00:59:21.558 }, 00:59:21.558 "cntlid": 139, 00:59:21.558 "listen_address": { 00:59:21.558 "adrfam": "IPv4", 00:59:21.558 "traddr": "10.0.0.2", 00:59:21.558 "trsvcid": "4420", 00:59:21.558 "trtype": "TCP" 00:59:21.558 }, 00:59:21.558 "peer_address": { 00:59:21.558 "adrfam": "IPv4", 00:59:21.558 "traddr": "10.0.0.1", 00:59:21.558 "trsvcid": "56600", 00:59:21.558 "trtype": "TCP" 00:59:21.558 }, 00:59:21.558 "qid": 0, 00:59:21.558 "state": "enabled", 00:59:21.558 "thread": "nvmf_tgt_poll_group_000" 00:59:21.558 } 00:59:21.558 ]' 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:21.558 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:21.866 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:59:21.866 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
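Each successful pass ends with the same verification: the qpair list is fetched with nvmf_subsystem_get_qpairs and the negotiated digest, DH group and auth state are compared against the expected values with jq. A condensed sketch of that check, assuming the qpairs JSON has the shape printed above:

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  # "completed" means the DH-HMAC-CHAP exchange finished on this admin qpair
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]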
00:59:21.866 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:21.866 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:21.866 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:21.866 10:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:01:ODlkZWYzZmRjZjJlYmNjODI5ODQwOTFlZjczZjg2NzQDO+Qg: --dhchap-ctrl-secret DHHC-1:02:YjIzNmU4MjNiZjAyZGY0NmJlNjY0YzA3MDdhOWRiNGVjODc4YjZkNWY4NTQ0YmE2UTBGjQ==: 00:59:22.463 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:22.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:22.463 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:22.463 10:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:22.463 10:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:22.463 10:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:22.463 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:22.463 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:59:22.463 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:22.721 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:23.284 00:59:23.284 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:23.284 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:23.285 10:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:23.285 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:23.285 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:23.285 10:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:23.285 10:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:23.285 10:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:23.542 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:23.542 { 00:59:23.542 "auth": { 00:59:23.542 "dhgroup": "ffdhe8192", 00:59:23.542 "digest": "sha512", 00:59:23.542 "state": "completed" 00:59:23.542 }, 00:59:23.542 "cntlid": 141, 00:59:23.542 "listen_address": { 00:59:23.542 "adrfam": "IPv4", 00:59:23.542 "traddr": "10.0.0.2", 00:59:23.542 "trsvcid": "4420", 00:59:23.542 "trtype": "TCP" 00:59:23.542 }, 00:59:23.542 "peer_address": { 00:59:23.542 "adrfam": "IPv4", 00:59:23.542 "traddr": "10.0.0.1", 00:59:23.542 "trsvcid": "56610", 00:59:23.542 "trtype": "TCP" 00:59:23.542 }, 00:59:23.542 "qid": 0, 00:59:23.542 "state": "enabled", 00:59:23.542 "thread": "nvmf_tgt_poll_group_000" 00:59:23.542 } 00:59:23.542 ]' 00:59:23.542 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:23.542 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:23.542 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:23.542 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:59:23.542 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:23.542 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:23.542 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:23.542 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:23.799 10:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:02:YTE0ODdkMDViNzZlMzgyNTIzNDEwOTA4ZjdlNDY5NTlmMzM0YzlhMmI1MGZiZGYxHw6d1A==: --dhchap-ctrl-secret DHHC-1:01:MjVhNTQwZDY5ZWJiMmZhNTYxZGYxYTE1YWY4ZGQ0NDitHJyT: 00:59:24.362 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:24.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:24.362 10:54:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:24.362 10:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:24.362 10:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:24.362 10:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:24.362 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:59:24.362 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:59:24.362 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:59:24.618 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:59:24.618 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:24.618 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:24.618 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:59:24.618 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:59:24.618 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:24.618 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:59:24.618 10:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:24.618 10:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:24.618 10:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:24.619 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:24.619 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:25.183 00:59:25.183 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:25.183 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:25.183 10:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:25.183 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:25.183 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:25.183 10:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:25.183 10:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:25.183 10:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:59:25.183 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:25.183 { 00:59:25.183 "auth": { 00:59:25.183 "dhgroup": "ffdhe8192", 00:59:25.183 "digest": "sha512", 00:59:25.183 "state": "completed" 00:59:25.183 }, 00:59:25.183 "cntlid": 143, 00:59:25.183 "listen_address": { 00:59:25.183 "adrfam": "IPv4", 00:59:25.183 "traddr": "10.0.0.2", 00:59:25.183 "trsvcid": "4420", 00:59:25.183 "trtype": "TCP" 00:59:25.183 }, 00:59:25.183 "peer_address": { 00:59:25.183 "adrfam": "IPv4", 00:59:25.183 "traddr": "10.0.0.1", 00:59:25.183 "trsvcid": "46744", 00:59:25.183 "trtype": "TCP" 00:59:25.183 }, 00:59:25.183 "qid": 0, 00:59:25.183 "state": "enabled", 00:59:25.183 "thread": "nvmf_tgt_poll_group_000" 00:59:25.183 } 00:59:25.183 ]' 00:59:25.183 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:25.439 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:25.439 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:25.439 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:59:25.439 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:25.439 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:25.439 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:25.439 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:25.696 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:26.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:59:26.261 10:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:26.261 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:26.825 00:59:26.825 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:26.825 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:26.825 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:27.082 { 00:59:27.082 "auth": { 00:59:27.082 "dhgroup": "ffdhe8192", 00:59:27.082 "digest": "sha512", 00:59:27.082 "state": "completed" 00:59:27.082 }, 00:59:27.082 "cntlid": 145, 00:59:27.082 "listen_address": { 00:59:27.082 "adrfam": "IPv4", 00:59:27.082 "traddr": "10.0.0.2", 00:59:27.082 "trsvcid": "4420", 00:59:27.082 "trtype": "TCP" 00:59:27.082 }, 00:59:27.082 "peer_address": { 00:59:27.082 "adrfam": "IPv4", 00:59:27.082 "traddr": "10.0.0.1", 00:59:27.082 "trsvcid": "46760", 00:59:27.082 "trtype": "TCP" 00:59:27.082 }, 00:59:27.082 "qid": 0, 00:59:27.082 "state": "enabled", 00:59:27.082 "thread": "nvmf_tgt_poll_group_000" 00:59:27.082 } 
00:59:27.082 ]' 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:59:27.082 10:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:27.340 10:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:27.340 10:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:27.340 10:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:27.340 10:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:00:MTA3ZDE1ZjEwZDRhNzllN2U3NWZmMmU4M2ZmMmVkZTkzNmRiZWNmZmUxMDEyZGI0FSGG2Q==: --dhchap-ctrl-secret DHHC-1:03:MGNkYTAzMTcwMTA3MTgzMDRhZWM5N2M4Yjk5NjhmZjlhZDNiMGQzOTU5OTMwY2I3MTI2Y2QyNzk3YmI4MGU3ME42o24=: 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:27.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:27.905 10:54:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:59:27.905 10:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:59:28.470 2024/07/22 10:54:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:59:28.471 request: 00:59:28.471 { 00:59:28.471 "method": "bdev_nvme_attach_controller", 00:59:28.471 "params": { 00:59:28.471 "name": "nvme0", 00:59:28.471 "trtype": "tcp", 00:59:28.471 "traddr": "10.0.0.2", 00:59:28.471 "adrfam": "ipv4", 00:59:28.471 "trsvcid": "4420", 00:59:28.471 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:59:28.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7", 00:59:28.471 "prchk_reftag": false, 00:59:28.471 "prchk_guard": false, 00:59:28.471 "hdgst": false, 00:59:28.471 "ddgst": false, 00:59:28.471 "dhchap_key": "key2" 00:59:28.471 } 00:59:28.471 } 00:59:28.471 Got JSON-RPC error response 00:59:28.471 GoRPCClient: error on JSON-RPC call 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:59:28.471 10:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:59:29.036 2024/07/22 10:54:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:59:29.036 request: 00:59:29.036 { 00:59:29.036 "method": "bdev_nvme_attach_controller", 00:59:29.036 "params": { 00:59:29.036 "name": "nvme0", 00:59:29.036 "trtype": "tcp", 00:59:29.036 "traddr": "10.0.0.2", 00:59:29.036 "adrfam": "ipv4", 00:59:29.036 "trsvcid": "4420", 00:59:29.036 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:59:29.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7", 00:59:29.036 "prchk_reftag": false, 00:59:29.036 "prchk_guard": false, 00:59:29.036 "hdgst": false, 00:59:29.036 "ddgst": false, 00:59:29.036 "dhchap_key": "key1", 00:59:29.036 "dhchap_ctrlr_key": "ckey2" 00:59:29.036 } 00:59:29.036 } 00:59:29.036 Got JSON-RPC error response 00:59:29.036 GoRPCClient: error on JSON-RPC call 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key1 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:29.036 10:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:29.602 2024/07/22 10:54:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:59:29.602 request: 00:59:29.602 { 00:59:29.602 "method": "bdev_nvme_attach_controller", 00:59:29.602 "params": { 00:59:29.602 "name": "nvme0", 00:59:29.602 "trtype": "tcp", 00:59:29.602 "traddr": "10.0.0.2", 00:59:29.602 "adrfam": "ipv4", 00:59:29.602 "trsvcid": "4420", 00:59:29.602 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:59:29.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7", 00:59:29.602 "prchk_reftag": false, 00:59:29.602 "prchk_guard": false, 00:59:29.602 "hdgst": false, 00:59:29.602 "ddgst": false, 00:59:29.602 "dhchap_key": "key1", 00:59:29.602 "dhchap_ctrlr_key": "ckey1" 00:59:29.602 } 00:59:29.602 } 00:59:29.602 Got JSON-RPC error response 00:59:29.602 GoRPCClient: error on JSON-RPC call 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 94425 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 94425 ']' 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 94425 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94425 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:59:29.602 killing process with pid 94425 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94425' 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 94425 00:59:29.602 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 94425 00:59:29.860 10:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:59:29.860 10:54:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:59:29.860 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:59:29.860 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:29.860 10:54:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=98939 00:59:29.860 10:54:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 98939 00:59:29.860 10:54:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:59:29.861 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 98939 ']' 00:59:29.861 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:29.861 10:54:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:59:29.861 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:29.861 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:59:29.861 10:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 98939 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 98939 ']' 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:59:30.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
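Here the test stops the first target process (pid 94425) and starts a fresh one with the nvmf_auth debug log component enabled, then waits for its RPC socket before reconfiguring authentication. A minimal sketch of that restart, assuming the same network namespace and binary path as in the log; the readiness loop below is a simplified stand-in for the test's waitforlisten helper:

  ip netns exec nvmf_tgt_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # poll the default RPC socket until the new target answers
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done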
00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:30.797 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:31.056 10:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:31.623 00:59:31.623 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:59:31.623 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:59:31.623 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:31.623 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:31.623 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:31.623 10:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:31.623 10:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:31.882 10:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:31.882 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:59:31.882 { 00:59:31.882 "auth": { 00:59:31.882 "dhgroup": 
"ffdhe8192", 00:59:31.882 "digest": "sha512", 00:59:31.882 "state": "completed" 00:59:31.882 }, 00:59:31.882 "cntlid": 1, 00:59:31.882 "listen_address": { 00:59:31.882 "adrfam": "IPv4", 00:59:31.882 "traddr": "10.0.0.2", 00:59:31.882 "trsvcid": "4420", 00:59:31.882 "trtype": "TCP" 00:59:31.882 }, 00:59:31.882 "peer_address": { 00:59:31.882 "adrfam": "IPv4", 00:59:31.882 "traddr": "10.0.0.1", 00:59:31.882 "trsvcid": "46834", 00:59:31.882 "trtype": "TCP" 00:59:31.882 }, 00:59:31.882 "qid": 0, 00:59:31.882 "state": "enabled", 00:59:31.882 "thread": "nvmf_tgt_poll_group_000" 00:59:31.882 } 00:59:31.882 ]' 00:59:31.882 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:59:31.882 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:59:31.882 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:59:31.882 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:59:31.882 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:59:31.882 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:31.882 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:31.882 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:32.141 10:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid 5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-secret DHHC-1:03:YThiYWJkZGQ4Nzg0MTcxMGU0ODhiMzJjMzcxNjUwMGM0Yzk1ZDlhYzg2YjZkMDJkZDczNmU2ZjU5OGMzODUzM4p+FzM=: 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:32.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --dhchap-key key3 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:59:32.708 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:32.967 2024/07/22 10:54:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:59:32.967 request: 00:59:32.967 { 00:59:32.967 "method": "bdev_nvme_attach_controller", 00:59:32.967 "params": { 00:59:32.967 "name": "nvme0", 00:59:32.967 "trtype": "tcp", 00:59:32.967 "traddr": "10.0.0.2", 00:59:32.967 "adrfam": "ipv4", 00:59:32.967 "trsvcid": "4420", 00:59:32.967 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:59:32.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7", 00:59:32.967 "prchk_reftag": false, 00:59:32.967 "prchk_guard": false, 00:59:32.967 "hdgst": false, 00:59:32.967 "ddgst": false, 00:59:32.967 "dhchap_key": "key3" 00:59:32.967 } 00:59:32.967 } 00:59:32.967 Got JSON-RPC error response 00:59:32.967 GoRPCClient: error on JSON-RPC call 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:59:32.967 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:59:32.968 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:59:32.968 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
00:59:32.968 10:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:59:33.227 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:33.227 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:59:33.227 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:33.227 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:59:33.227 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:33.227 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:59:33.227 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:33.227 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:33.227 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:59:33.486 2024/07/22 10:54:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:59:33.486 request: 00:59:33.486 { 00:59:33.486 "method": "bdev_nvme_attach_controller", 00:59:33.486 "params": { 00:59:33.486 "name": "nvme0", 00:59:33.486 "trtype": "tcp", 00:59:33.486 "traddr": "10.0.0.2", 00:59:33.486 "adrfam": "ipv4", 00:59:33.486 "trsvcid": "4420", 00:59:33.486 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:59:33.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7", 00:59:33.486 "prchk_reftag": false, 00:59:33.486 "prchk_guard": false, 00:59:33.486 "hdgst": false, 00:59:33.486 "ddgst": false, 00:59:33.486 "dhchap_key": "key3" 00:59:33.486 } 00:59:33.486 } 00:59:33.486 Got JSON-RPC error response 00:59:33.486 GoRPCClient: error on JSON-RPC call 00:59:33.486 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:59:33.486 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:59:33.486 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:59:33.486 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
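
The two JSON-RPC failures above are the intended result: the host-side DH-HMAC-CHAP options are first narrowed with bdev_nvme_set_options, after which the attach is expected to fail, and the call is wrapped in the harness's NOT helper (from autotest_common.sh), so the Code=-5 (Input/output error) response is what lets the step pass. Schematically, using only options that appear in this trace (a sketch, not the script itself):

# Host-side RPC socket is /var/tmp/host.sock throughout
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
# (second case above: --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512)

NOT rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
# NOT() inverts the exit status, so the expected attach failure is recorded as a pass.

# target/auth.sh@175 below then restores the full digest/dhgroup set before the next case:
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
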
00:59:33.486 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:59:33.486 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:59:33.486 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:59:33.486 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:59:33.486 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:59:33.487 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:59:33.746 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:59:34.005 2024/07/22 10:54:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:59:34.005 request: 00:59:34.005 { 00:59:34.005 "method": "bdev_nvme_attach_controller", 00:59:34.005 "params": { 00:59:34.005 "name": "nvme0", 00:59:34.005 "trtype": "tcp", 00:59:34.005 "traddr": "10.0.0.2", 00:59:34.005 "adrfam": "ipv4", 00:59:34.005 "trsvcid": "4420", 00:59:34.005 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:59:34.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7", 00:59:34.005 "prchk_reftag": false, 00:59:34.005 "prchk_guard": false, 00:59:34.005 "hdgst": false, 00:59:34.005 "ddgst": false, 00:59:34.005 "dhchap_key": "key0", 00:59:34.005 "dhchap_ctrlr_key": "key1" 00:59:34.005 } 00:59:34.005 } 00:59:34.005 Got JSON-RPC error response 00:59:34.005 GoRPCClient: error on JSON-RPC call 00:59:34.005 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:59:34.005 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:59:34.005 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:59:34.005 10:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:59:34.005 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:59:34.005 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:59:34.262 00:59:34.262 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:59:34.262 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:59:34.262 10:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 94470 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 94470 ']' 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 94470 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94470 00:59:34.519 killing process with pid 94470 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94470' 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 94470 00:59:34.519 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 94470 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:59:35.113 rmmod nvme_tcp 00:59:35.113 rmmod nvme_fabrics 00:59:35.113 rmmod nvme_keyring 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 98939 ']' 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 98939 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 98939 ']' 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 98939 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98939 00:59:35.113 killing process with pid 98939 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98939' 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 98939 00:59:35.113 10:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 98939 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gcv /tmp/spdk.key-sha256.NOG /tmp/spdk.key-sha384.ZNv /tmp/spdk.key-sha512.N4R /tmp/spdk.key-sha512.iVy /tmp/spdk.key-sha384.Yc8 /tmp/spdk.key-sha256.31O '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:59:35.372 00:59:35.372 real 2m13.875s 00:59:35.372 user 5m12.112s 00:59:35.372 sys 0m25.448s 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:35.372 10:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:35.372 ************************************ 00:59:35.372 END TEST nvmf_auth_target 00:59:35.372 ************************************ 00:59:35.372 10:54:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:59:35.372 10:54:43 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:59:35.372 10:54:43 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:59:35.372 10:54:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:59:35.372 10:54:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:35.372 10:54:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:35.372 ************************************ 00:59:35.372 START TEST nvmf_bdevio_no_huge 00:59:35.372 ************************************ 00:59:35.372 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:59:35.630 * Looking for test storage... 
00:59:35.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:59:35.630 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:59:35.630 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:59:35.630 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:35.630 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:59:35.631 10:54:43 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:59:35.631 Cannot find device "nvmf_tgt_br" 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:59:35.631 Cannot find device "nvmf_tgt_br2" 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:59:35.631 Cannot find device "nvmf_tgt_br" 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:59:35.631 Cannot find device "nvmf_tgt_br2" 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:59:35.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:59:35.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:59:35.631 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:59:35.889 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:59:35.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:59:35.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:59:35.890 00:59:35.890 --- 10.0.0.2 ping statistics --- 00:59:35.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:35.890 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:59:35.890 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:59:35.890 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:59:35.890 00:59:35.890 --- 10.0.0.3 ping statistics --- 00:59:35.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:35.890 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:59:35.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:59:35.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:59:35.890 00:59:35.890 --- 10.0.0.1 ping statistics --- 00:59:35.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:35.890 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=99330 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 99330 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 99330 ']' 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
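
The interface setup that nvmf_veth_init traced above amounts to the following topology (a simplified sketch; interface, namespace and address values are exactly the ones this run uses, binary paths are shortened, and the second target interface nvmf_tgt_if2/10.0.0.3 is created the same way):

ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target address

ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the two sides together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2          # connectivity checks, as run in the trace above

# The target itself is then launched inside the namespace without hugepages:
ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
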
00:59:35.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:59:35.890 10:54:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:59:36.148 [2024-07-22 10:54:43.848054] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:59:36.148 [2024-07-22 10:54:43.848122] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:59:36.148 [2024-07-22 10:54:43.985797] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:59:36.148 [2024-07-22 10:54:43.990662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:59:36.405 [2024-07-22 10:54:44.097694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:59:36.405 [2024-07-22 10:54:44.097738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:59:36.405 [2024-07-22 10:54:44.097762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:59:36.405 [2024-07-22 10:54:44.097770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:59:36.405 [2024-07-22 10:54:44.097777] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:59:36.405 [2024-07-22 10:54:44.098179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:59:36.405 [2024-07-22 10:54:44.098340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:59:36.405 [2024-07-22 10:54:44.098642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:59:36.405 [2024-07-22 10:54:44.098524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:59:36.969 [2024-07-22 10:54:44.758372] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:36.969 10:54:44 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:59:36.969 Malloc0 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:36.969 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:59:36.970 [2024-07-22 10:54:44.812003] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:59:36.970 { 00:59:36.970 "params": { 00:59:36.970 "name": "Nvme$subsystem", 00:59:36.970 "trtype": "$TEST_TRANSPORT", 00:59:36.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:59:36.970 "adrfam": "ipv4", 00:59:36.970 "trsvcid": "$NVMF_PORT", 00:59:36.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:59:36.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:59:36.970 "hdgst": ${hdgst:-false}, 00:59:36.970 "ddgst": ${ddgst:-false} 00:59:36.970 }, 00:59:36.970 "method": "bdev_nvme_attach_controller" 00:59:36.970 } 00:59:36.970 EOF 00:59:36.970 )") 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
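
Condensed, the target-side wiring for this bdevio run is just the five RPCs below, followed by the bdevio invocation that consumes the gen_nvmf_target_json output (the rendered attach-controller JSON is printed in full a little further down in the trace). Flags are exactly as invoked in this run; paths are shortened and rpc_cmd is written as plain rpc.py against the target's default socket:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB, 512-byte block backing bdev
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevio then attaches to that listener over NVMe/TCP, itself running without hugepages;
# /dev/fd/62 carries the bdev_nvme_attach_controller JSON shown in the trace
test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
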
00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:59:36.970 10:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:59:36.970 "params": { 00:59:36.970 "name": "Nvme1", 00:59:36.970 "trtype": "tcp", 00:59:36.970 "traddr": "10.0.0.2", 00:59:36.970 "adrfam": "ipv4", 00:59:36.970 "trsvcid": "4420", 00:59:36.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:59:36.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:59:36.970 "hdgst": false, 00:59:36.970 "ddgst": false 00:59:36.970 }, 00:59:36.970 "method": "bdev_nvme_attach_controller" 00:59:36.970 }' 00:59:36.970 [2024-07-22 10:54:44.867056] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:59:36.970 [2024-07-22 10:54:44.867120] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid99384 ] 00:59:37.228 [2024-07-22 10:54:44.996566] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:59:37.228 [2024-07-22 10:54:45.000684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:59:37.228 [2024-07-22 10:54:45.126082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:59:37.228 [2024-07-22 10:54:45.128308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:59:37.228 [2024-07-22 10:54:45.128311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:59:37.485 I/O targets: 00:59:37.485 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:59:37.485 00:59:37.485 00:59:37.485 CUnit - A unit testing framework for C - Version 2.1-3 00:59:37.485 http://cunit.sourceforge.net/ 00:59:37.485 00:59:37.485 00:59:37.485 Suite: bdevio tests on: Nvme1n1 00:59:37.485 Test: blockdev write read block ...passed 00:59:37.485 Test: blockdev write zeroes read block ...passed 00:59:37.485 Test: blockdev write zeroes read no split ...passed 00:59:37.485 Test: blockdev write zeroes read split ...passed 00:59:37.744 Test: blockdev write zeroes read split partial ...passed 00:59:37.744 Test: blockdev reset ...[2024-07-22 10:54:45.419275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:59:37.744 [2024-07-22 10:54:45.419364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b9090 (9): Bad file descriptor 00:59:37.744 [2024-07-22 10:54:45.438412] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:59:37.744 passed 00:59:37.744 Test: blockdev write read 8 blocks ...passed 00:59:37.744 Test: blockdev write read size > 128k ...passed 00:59:37.744 Test: blockdev write read invalid size ...passed 00:59:37.744 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:59:37.744 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:59:37.744 Test: blockdev write read max offset ...passed 00:59:37.744 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:59:37.744 Test: blockdev writev readv 8 blocks ...passed 00:59:37.744 Test: blockdev writev readv 30 x 1block ...passed 00:59:37.744 Test: blockdev writev readv block ...passed 00:59:37.744 Test: blockdev writev readv size > 128k ...passed 00:59:37.744 Test: blockdev writev readv size > 128k in two iovs ...passed 00:59:37.744 Test: blockdev comparev and writev ...[2024-07-22 10:54:45.608773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:59:37.744 [2024-07-22 10:54:45.608816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:59:37.744 [2024-07-22 10:54:45.608832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:59:37.744 [2024-07-22 10:54:45.608843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:59:37.744 [2024-07-22 10:54:45.609199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:59:37.744 [2024-07-22 10:54:45.609217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:59:37.744 [2024-07-22 10:54:45.609230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:59:37.744 [2024-07-22 10:54:45.609239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:59:37.744 [2024-07-22 10:54:45.609495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:59:37.744 [2024-07-22 10:54:45.609513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:59:37.744 [2024-07-22 10:54:45.609534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:59:37.744 [2024-07-22 10:54:45.609543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:59:37.744 [2024-07-22 10:54:45.609807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:59:37.744 [2024-07-22 10:54:45.609824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:59:37.744 [2024-07-22 10:54:45.609837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:59:37.744 [2024-07-22 10:54:45.609847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:59:37.744 passed 00:59:38.002 Test: blockdev nvme passthru rw ...passed 00:59:38.002 Test: blockdev nvme passthru vendor specific ...[2024-07-22 10:54:45.692601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:59:38.002 [2024-07-22 10:54:45.692633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:59:38.002 [2024-07-22 10:54:45.692725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:59:38.002 [2024-07-22 10:54:45.692736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:59:38.002 [2024-07-22 10:54:45.692815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:59:38.002 [2024-07-22 10:54:45.692825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:59:38.002 passed 00:59:38.002 Test: blockdev nvme admin passthru ...[2024-07-22 10:54:45.692904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:59:38.002 [2024-07-22 10:54:45.692914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:59:38.002 passed 00:59:38.002 Test: blockdev copy ...passed 00:59:38.002 00:59:38.002 Run Summary: Type Total Ran Passed Failed Inactive 00:59:38.002 suites 1 1 n/a 0 0 00:59:38.002 tests 23 23 23 0 0 00:59:38.002 asserts 152 152 152 0 n/a 00:59:38.002 00:59:38.002 Elapsed time = 0.932 seconds 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:59:38.260 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:59:38.260 rmmod nvme_tcp 00:59:38.260 rmmod nvme_fabrics 00:59:38.260 rmmod nvme_keyring 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 99330 ']' 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 99330 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 99330 ']' 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 99330 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99330 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:59:38.517 killing process with pid 99330 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99330' 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 99330 00:59:38.517 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 99330 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:59:38.774 00:59:38.774 real 0m3.485s 00:59:38.774 user 0m11.736s 00:59:38.774 sys 0m1.478s 00:59:38.774 ************************************ 00:59:38.774 END TEST nvmf_bdevio_no_huge 00:59:38.774 ************************************ 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:38.774 10:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:59:39.032 10:54:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:59:39.032 10:54:46 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:59:39.032 10:54:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:59:39.032 10:54:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:39.032 10:54:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:39.033 ************************************ 00:59:39.033 START TEST nvmf_tls 00:59:39.033 ************************************ 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:59:39.033 * Looking for test storage... 
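For reference, the nvmf_bdevio_no_huge teardown traced above (nvmftestfini / killprocess / module unload) follows a simple pattern: check that the target pid is still alive, kill it, wait for it to exit, then unload the nvme-tcp modules. A minimal sketch of that pattern, using the pid and commands from the trace (the real helpers live in nvmf/common.sh and autotest_common.sh and also special-case processes started under sudo):

  # sketch of the killprocess/teardown pattern traced above
  pid=99330                                        # nvmfpid from the log above
  if kill -0 "$pid" 2>/dev/null; then
      echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
      kill "$pid"
      wait "$pid" 2>/dev/null || true              # only works when $pid is a child of this shell
  fi
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics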
00:59:39.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:59:39.033 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:59:39.301 Cannot find device "nvmf_tgt_br" 00:59:39.301 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:59:39.301 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:59:39.301 Cannot find device "nvmf_tgt_br2" 00:59:39.301 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:59:39.301 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:59:39.301 10:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:59:39.301 Cannot find device "nvmf_tgt_br" 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:59:39.301 Cannot find device "nvmf_tgt_br2" 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:59:39.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:59:39.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:59:39.301 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:59:39.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:59:39.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:59:39.560 00:59:39.560 --- 10.0.0.2 ping statistics --- 00:59:39.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:39.560 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:59:39.560 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:59:39.560 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:59:39.560 00:59:39.560 --- 10.0.0.3 ping statistics --- 00:59:39.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:39.560 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:59:39.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
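The nvmf_veth_init block traced above builds a small two-namespace topology: the target lives inside nvmf_tgt_ns_spdk on 10.0.0.2, the initiator stays in the root namespace on 10.0.0.1, and the veth peers are bridged together with a TCP/4420 accept rule. Condensed from the ip/iptables commands in the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here; the "Cannot find device" / "Cannot open network namespace" lines above are just the cleanup of links that did not exist yet):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2        # initiator -> target, as verified in the trace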
00:59:39.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:59:39.560 00:59:39.560 --- 10.0.0.1 ping statistics --- 00:59:39.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:39.560 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99573 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99573 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 99573 ']' 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:59:39.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:59:39.560 10:54:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:59:39.560 [2024-07-22 10:54:47.433679] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:59:39.560 [2024-07-22 10:54:47.433748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:59:39.818 [2024-07-22 10:54:47.555711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:59:39.818 [2024-07-22 10:54:47.578788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:39.818 [2024-07-22 10:54:47.618251] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:59:39.818 [2024-07-22 10:54:47.618316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:59:39.818 [2024-07-22 10:54:47.618325] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:59:39.818 [2024-07-22 10:54:47.618333] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:59:39.818 [2024-07-22 10:54:47.618340] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:59:39.818 [2024-07-22 10:54:47.618386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:59:40.386 10:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:59:40.386 10:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:59:40.386 10:54:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:59:40.387 10:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:59:40.387 10:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:59:40.387 10:54:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:59:40.387 10:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:59:40.387 10:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:59:40.646 true 00:59:40.646 10:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:59:40.646 10:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:59:40.905 10:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:59:40.905 10:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:59:40.905 10:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:59:41.163 10:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:59:41.163 10:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:59:41.163 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:59:41.163 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:59:41.163 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:59:41.422 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:59:41.422 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:59:41.681 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:59:41.681 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:59:41.681 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:59:41.681 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:59:41.940 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:59:41.940 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:59:41.940 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:59:41.940 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:59:41.940 10:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
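Because the target is launched with --wait-for-rpc, the socket layer can be reconfigured before any subsystem exists; the trace switches the default sock implementation to ssl and pins TLS 1.3 before framework_start_init is issued. Roughly, with rpc.py as in the trace (the readiness loop is a sketch and assumes rpc_get_methods as the liveness probe, which is what waitforlisten effectively does):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  nvmfpid=$!
  until $rpc -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # wait for /var/tmp/spdk.sock

  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc sock_impl_get_options -i ssl | jq -r .tls_version                # should print 13
  $rpc framework_start_init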
00:59:42.198 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:59:42.198 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:59:42.198 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:59:42.457 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:59:42.457 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.t6IOsvv165 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.czK1NvTr94 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.t6IOsvv165 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.czK1NvTr94 00:59:42.716 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:59:42.975 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:59:43.234 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.t6IOsvv165 
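The two interchange-format keys produced above ("NVMeTLSkey-1:01:...") are written to throw-away files and locked down to mode 0600 before being handed to the target; setup_nvmf_tgt, traced in the lines that follow, then creates the TCP transport, a subsystem backed by a malloc namespace, a TLS-enabled listener (-k), and registers host1 with the first key only. Condensed sketch using the same RPCs, NQNs and key value as the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  key_path=$(mktemp)        # /tmp/tmp.t6IOsvv165 in the trace
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"

  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"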
00:59:43.234 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.t6IOsvv165 00:59:43.234 10:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:59:43.234 [2024-07-22 10:54:51.158310] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:43.492 10:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:59:43.492 10:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:59:43.750 [2024-07-22 10:54:51.525729] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:59:43.750 [2024-07-22 10:54:51.525921] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:43.750 10:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:59:44.009 malloc0 00:59:44.009 10:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:59:44.009 10:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t6IOsvv165 00:59:44.267 [2024-07-22 10:54:52.105542] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:59:44.267 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.t6IOsvv165 00:59:56.470 Initializing NVMe Controllers 00:59:56.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:59:56.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:59:56.470 Initialization complete. Launching workers. 
00:59:56.470 ======================================================== 00:59:56.470 Latency(us) 00:59:56.470 Device Information : IOPS MiB/s Average min max 00:59:56.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15626.38 61.04 4096.11 895.68 5597.34 00:59:56.470 ======================================================== 00:59:56.470 Total : 15626.38 61.04 4096.11 895.68 5597.34 00:59:56.470 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t6IOsvv165 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.t6IOsvv165' 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99917 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99917 /var/tmp/bdevperf.sock 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 99917 ']' 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:59:56.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:59:56.470 10:55:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:59:56.470 [2024-07-22 10:55:02.354332] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:59:56.470 [2024-07-22 10:55:02.354403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99917 ] 00:59:56.470 [2024-07-22 10:55:02.473613] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
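run_bdevperf, traced next, drives the same path from the initiator side: bdevperf is started in wait-for-RPC mode (-z) on its own RPC socket, a controller is attached over TCP with the registered PSK, and bdevperf.py runs the actual I/O; with the correct key the attach succeeds and TLSTESTn1 sustains roughly 6k IOPS in the run below. Sketch built from the commands in the trace (the readiness loop is an assumption, and $key_path is the key file written earlier):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  until $spdk/scripts/rpc.py -t 1 -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests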
00:59:56.470 [2024-07-22 10:55:02.482551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:56.470 [2024-07-22 10:55:02.523122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:59:56.470 10:55:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:59:56.470 10:55:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:59:56.470 10:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t6IOsvv165 00:59:56.470 [2024-07-22 10:55:03.355576] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:59:56.470 [2024-07-22 10:55:03.355658] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:59:56.470 TLSTESTn1 00:59:56.470 10:55:03 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:59:56.470 Running I/O for 10 seconds... 01:00:06.495 01:00:06.495 Latency(us) 01:00:06.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:06.495 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:00:06.495 Verification LBA range: start 0x0 length 0x2000 01:00:06.495 TLSTESTn1 : 10.01 5984.80 23.38 0.00 0.00 21355.49 3947.95 15475.97 01:00:06.495 =================================================================================================================== 01:00:06.495 Total : 5984.80 23.38 0.00 0.00 21355.49 3947.95 15475.97 01:00:06.495 0 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 99917 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 99917 ']' 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 99917 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99917 01:00:06.495 killing process with pid 99917 01:00:06.495 Received shutdown signal, test time was about 10.000000 seconds 01:00:06.495 01:00:06.495 Latency(us) 01:00:06.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:06.495 =================================================================================================================== 01:00:06.495 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99917' 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 99917 01:00:06.495 [2024-07-22 10:55:13.591804] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:00:06.495 10:55:13 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 99917 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.czK1NvTr94 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.czK1NvTr94 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:00:06.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.czK1NvTr94 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.czK1NvTr94' 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100069 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100069 /var/tmp/bdevperf.sock 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100069 ']' 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:06.495 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:00:06.495 [2024-07-22 10:55:13.821187] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:06.495 [2024-07-22 10:55:13.821249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100069 ] 01:00:06.495 [2024-07-22 10:55:13.939371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
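This second run is the first negative case: the attach uses /tmp/tmp.czK1NvTr94, a key that was generated but never registered for cnode1/host1 with nvmf_subsystem_add_host, so the TLS session cannot be established and the client ends up with the "Transport endpoint is not connected" / JSON-RPC Code=-5 error reported below. The only difference from the successful attach is the key file, e.g. (assuming a bdevperf instance already listening on /var/tmp/bdevperf.sock, as in the trace):

  # expected to fail: this PSK was never added for cnode1/host1 on the target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.czK1NvTr94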
01:00:06.495 [2024-07-22 10:55:13.963042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:06.495 [2024-07-22 10:55:14.003297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:00:06.763 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:06.763 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:06.763 10:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.czK1NvTr94 01:00:07.022 [2024-07-22 10:55:14.831723] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:00:07.022 [2024-07-22 10:55:14.831813] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:00:07.022 [2024-07-22 10:55:14.841084] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:00:07.022 [2024-07-22 10:55:14.841896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2122c10 (107): Transport endpoint is not connected 01:00:07.022 [2024-07-22 10:55:14.842885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2122c10 (9): Bad file descriptor 01:00:07.022 [2024-07-22 10:55:14.843881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:00:07.022 [2024-07-22 10:55:14.843904] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:00:07.022 [2024-07-22 10:55:14.843916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
01:00:07.022 request: 01:00:07.022 { 01:00:07.022 "method": "bdev_nvme_attach_controller", 01:00:07.022 "params": { 01:00:07.022 "name": "TLSTEST", 01:00:07.022 "trtype": "tcp", 01:00:07.022 "traddr": "10.0.0.2", 01:00:07.022 "adrfam": "ipv4", 01:00:07.022 "trsvcid": "4420", 01:00:07.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:07.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:07.022 "prchk_reftag": false, 01:00:07.022 "prchk_guard": false, 01:00:07.022 "hdgst": false, 01:00:07.022 "ddgst": false, 01:00:07.022 "psk": "/tmp/tmp.czK1NvTr94" 01:00:07.022 } 01:00:07.022 } 01:00:07.022 Got JSON-RPC error response 01:00:07.022 GoRPCClient: error on JSON-RPC call 01:00:07.022 2024/07/22 10:55:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.czK1NvTr94 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100069 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100069 ']' 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100069 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100069 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100069' 01:00:07.022 killing process with pid 100069 01:00:07.022 Received shutdown signal, test time was about 10.000000 seconds 01:00:07.022 01:00:07.022 Latency(us) 01:00:07.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:07.022 =================================================================================================================== 01:00:07.022 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100069 01:00:07.022 [2024-07-22 10:55:14.906642] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:00:07.022 10:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100069 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.t6IOsvv165 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.t6IOsvv165 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.t6IOsvv165 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.t6IOsvv165' 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100110 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100110 /var/tmp/bdevperf.sock 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100110 ']' 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:07.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:07.281 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:07.281 [2024-07-22 10:55:15.129008] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:07.281 [2024-07-22 10:55:15.129072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100110 ] 01:00:07.540 [2024-07-22 10:55:15.246744] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
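Each of these failure cases is wrapped in the NOT helper visible in the trace (local es=0, valid_exec_arg, es=1, (( es > 128 )), (( !es == 0 ))): the wrapped command is expected to fail, and the test only passes when it does. A stripped-down rendering of that inversion logic (the real helper in autotest_common.sh also validates the argument and screens out signal exits, which is what the (( es > 128 )) check above is doing; run_bdevperf is the helper from target/tls.sh):

  # NOT cmd...  - succeed only if cmd fails normally (non-zero exit, not killed by a signal)
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"     # died on a signal: propagate, do not count as a pass
      (( es != 0 ))                      # return 0 (pass) only when the command failed
  }

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.t6IOsvv165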
01:00:07.540 [2024-07-22 10:55:15.270961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:07.540 [2024-07-22 10:55:15.311209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:00:08.108 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:08.108 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:08.108 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.t6IOsvv165 01:00:08.367 [2024-07-22 10:55:16.139849] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:00:08.367 [2024-07-22 10:55:16.139940] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:00:08.367 [2024-07-22 10:55:16.149030] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:00:08.367 [2024-07-22 10:55:16.149068] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:00:08.367 [2024-07-22 10:55:16.149110] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:00:08.367 [2024-07-22 10:55:16.150045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bc10 (107): Transport endpoint is not connected 01:00:08.367 [2024-07-22 10:55:16.151033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bc10 (9): Bad file descriptor 01:00:08.367 [2024-07-22 10:55:16.152029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:00:08.367 [2024-07-22 10:55:16.152049] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:00:08.367 [2024-07-22 10:55:16.152061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
01:00:08.367 2024/07/22 10:55:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.t6IOsvv165 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:00:08.367 request: 01:00:08.367 { 01:00:08.367 "method": "bdev_nvme_attach_controller", 01:00:08.367 "params": { 01:00:08.367 "name": "TLSTEST", 01:00:08.367 "trtype": "tcp", 01:00:08.367 "traddr": "10.0.0.2", 01:00:08.367 "adrfam": "ipv4", 01:00:08.367 "trsvcid": "4420", 01:00:08.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:08.367 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:00:08.367 "prchk_reftag": false, 01:00:08.367 "prchk_guard": false, 01:00:08.367 "hdgst": false, 01:00:08.367 "ddgst": false, 01:00:08.367 "psk": "/tmp/tmp.t6IOsvv165" 01:00:08.367 } 01:00:08.367 } 01:00:08.367 Got JSON-RPC error response 01:00:08.367 GoRPCClient: error on JSON-RPC call 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100110 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100110 ']' 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100110 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100110 01:00:08.367 killing process with pid 100110 01:00:08.367 Received shutdown signal, test time was about 10.000000 seconds 01:00:08.367 01:00:08.367 Latency(us) 01:00:08.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:08.367 =================================================================================================================== 01:00:08.367 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100110' 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100110 01:00:08.367 [2024-07-22 10:55:16.213177] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:00:08.367 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100110 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.t6IOsvv165 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.t6IOsvv165 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.t6IOsvv165 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.t6IOsvv165' 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100150 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100150 /var/tmp/bdevperf.sock 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100150 ']' 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:08.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:08.627 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:08.627 [2024-07-22 10:55:16.435710] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:08.627 [2024-07-22 10:55:16.435776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100150 ] 01:00:08.627 [2024-07-22 10:55:16.553446] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
01:00:08.886 [2024-07-22 10:55:16.575889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:08.886 [2024-07-22 10:55:16.616284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:00:09.453 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:09.453 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:09.453 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t6IOsvv165 01:00:09.712 [2024-07-22 10:55:17.444940] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:00:09.712 [2024-07-22 10:55:17.445037] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:00:09.712 [2024-07-22 10:55:17.449347] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:00:09.712 [2024-07-22 10:55:17.449380] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:00:09.712 [2024-07-22 10:55:17.449423] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:00:09.712 [2024-07-22 10:55:17.450113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cac10 (107): Transport endpoint is not connected 01:00:09.712 [2024-07-22 10:55:17.451099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cac10 (9): Bad file descriptor 01:00:09.712 [2024-07-22 10:55:17.452096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 01:00:09.712 [2024-07-22 10:55:17.452116] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:00:09.712 [2024-07-22 10:55:17.452128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
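The tcp.c and posix.c ERROR lines above show why this attach is expected to fail: the initiator offers a TLS PSK identity of the form "NVMe0R01 <hostnqn> <subnqn>" (here host1 / cnode2), but the target has no PSK registered for that pairing, so the server-side lookup comes up empty and the connection is torn down. A sketch of the target-side registration that would make this particular pairing resolvable, if the intent were a positive test rather than this negative one (the cnode2 subsystem would also have to exist and be listening with -k, as in the setup traced further down; rpc.py stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path):

    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.t6IOsvv165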
01:00:09.712 2024/07/22 10:55:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.t6IOsvv165 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:00:09.712 request: 01:00:09.712 { 01:00:09.712 "method": "bdev_nvme_attach_controller", 01:00:09.712 "params": { 01:00:09.712 "name": "TLSTEST", 01:00:09.712 "trtype": "tcp", 01:00:09.712 "traddr": "10.0.0.2", 01:00:09.712 "adrfam": "ipv4", 01:00:09.712 "trsvcid": "4420", 01:00:09.712 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:00:09.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:09.712 "prchk_reftag": false, 01:00:09.712 "prchk_guard": false, 01:00:09.712 "hdgst": false, 01:00:09.712 "ddgst": false, 01:00:09.712 "psk": "/tmp/tmp.t6IOsvv165" 01:00:09.712 } 01:00:09.712 } 01:00:09.712 Got JSON-RPC error response 01:00:09.712 GoRPCClient: error on JSON-RPC call 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100150 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100150 ']' 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100150 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100150 01:00:09.712 killing process with pid 100150 01:00:09.712 Received shutdown signal, test time was about 10.000000 seconds 01:00:09.712 01:00:09.712 Latency(us) 01:00:09.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:09.712 =================================================================================================================== 01:00:09.712 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100150' 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100150 01:00:09.712 [2024-07-22 10:55:17.507633] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:00:09.712 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100150 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100190 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100190 /var/tmp/bdevperf.sock 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100190 ']' 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:09.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:09.971 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:09.971 [2024-07-22 10:55:17.730730] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:09.971 [2024-07-22 10:55:17.730795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100190 ] 01:00:09.971 [2024-07-22 10:55:17.848460] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
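As in the earlier runs, bdevperf is launched idle with -z on its own RPC socket (-r /var/tmp/bdevperf.sock), and the script blocks in waitforlisten until that socket answers before any bdev_nvme_attach_controller call is issued. A rough sketch of that launch-and-wait pattern; the polling loop is illustrative, autotest_common.sh's waitforlisten does considerably more (including the max_retries bookkeeping traced above), and rpc.py again stands in for the full scripts/rpc.py path.

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # wait until the bdevperf RPC socket is up before sending attach/perform_tests calls
    until rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done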
01:00:09.971 [2024-07-22 10:55:17.871060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:10.230 [2024-07-22 10:55:17.911520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:00:10.798 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:10.798 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:10.798 10:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:00:11.059 [2024-07-22 10:55:18.745118] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:00:11.059 [2024-07-22 10:55:18.746413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23efbc0 (9): Bad file descriptor 01:00:11.059 [2024-07-22 10:55:18.747408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:00:11.059 [2024-07-22 10:55:18.747429] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:00:11.059 [2024-07-22 10:55:18.747441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:00:11.059 2024/07/22 10:55:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:00:11.059 request: 01:00:11.059 { 01:00:11.059 "method": "bdev_nvme_attach_controller", 01:00:11.059 "params": { 01:00:11.059 "name": "TLSTEST", 01:00:11.059 "trtype": "tcp", 01:00:11.059 "traddr": "10.0.0.2", 01:00:11.059 "adrfam": "ipv4", 01:00:11.059 "trsvcid": "4420", 01:00:11.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:11.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:11.059 "prchk_reftag": false, 01:00:11.059 "prchk_guard": false, 01:00:11.059 "hdgst": false, 01:00:11.059 "ddgst": false 01:00:11.059 } 01:00:11.059 } 01:00:11.059 Got JSON-RPC error response 01:00:11.059 GoRPCClient: error on JSON-RPC call 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100190 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100190 ']' 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100190 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100190 01:00:11.059 killing process with pid 100190 01:00:11.059 Received shutdown signal, test time was about 10.000000 seconds 01:00:11.059 01:00:11.059 Latency(us) 01:00:11.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:11.059 =================================================================================================================== 01:00:11.059 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@954 -- # process_name=reactor_2 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100190' 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100190 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100190 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 99573 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 99573 ']' 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 99573 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:11.059 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99573 01:00:11.317 killing process with pid 99573 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99573' 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 99573 01:00:11.317 [2024-07-22 10:55:19.008104] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 99573 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.fbTvA9VFZS 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:00:11.317 10:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.fbTvA9VFZS 01:00:11.575 10:55:19 nvmf_tcp.nvmf_tls -- 
target/tls.sh@163 -- # nvmfappstart -m 0x2 01:00:11.575 10:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:00:11.575 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:11.575 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:11.575 10:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100247 01:00:11.575 10:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100247 01:00:11.575 10:55:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:00:11.575 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100247 ']' 01:00:11.576 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:11.576 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:11.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:11.576 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:11.576 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:11.576 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:11.576 [2024-07-22 10:55:19.314500] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:11.576 [2024-07-22 10:55:19.314578] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:11.576 [2024-07-22 10:55:19.432803] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:00:11.576 [2024-07-22 10:55:19.457446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:11.576 [2024-07-22 10:55:19.498344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:11.576 [2024-07-22 10:55:19.498385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:11.576 [2024-07-22 10:55:19.498394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:11.576 [2024-07-22 10:55:19.498402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:11.576 [2024-07-22 10:55:19.498409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
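The format_interchange_psk / format_key trace a few lines above (nvmf/common.sh@702-705) turns the raw hex string 00112233445566778899aabbccddeeff0011223344556677 and hash indicator 2 into the interchange key NVMeTLSkey-1:02:...wWXNJw==: that the rest of the run stores in /tmp/tmp.fbTvA9VFZS. A sketch of that assembly follows; the exact construction (the ASCII key bytes followed by their CRC32 in little-endian order, then base64) is an assumption read off the key's shape, not a quote of the python one-liner hidden in the trace.

    key="00112233445566778899aabbccddeeff0011223344556677"
    b64=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key")
    # digest value 2 is rendered as the "02" field between the prefix and the payload
    printf 'NVMeTLSkey-1:%02x:%s:\n' 2 "$b64"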
01:00:11.576 [2024-07-22 10:55:19.498433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:12.536 10:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:12.536 10:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:12.536 10:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:00:12.536 10:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:12.536 10:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:12.536 10:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:12.536 10:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.fbTvA9VFZS 01:00:12.536 10:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.fbTvA9VFZS 01:00:12.536 10:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:00:12.536 [2024-07-22 10:55:20.378674] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:12.536 10:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:00:12.795 10:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:00:13.053 [2024-07-22 10:55:20.766100] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:00:13.053 [2024-07-22 10:55:20.766289] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:13.053 10:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:00:13.053 malloc0 01:00:13.053 10:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:00:13.312 10:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fbTvA9VFZS 01:00:13.570 [2024-07-22 10:55:21.306049] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbTvA9VFZS 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fbTvA9VFZS' 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100344 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:00:13.570 10:55:21 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100344 /var/tmp/bdevperf.sock 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100344 ']' 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:13.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:13.570 10:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:13.570 [2024-07-22 10:55:21.377381] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:13.570 [2024-07-22 10:55:21.377448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100344 ] 01:00:13.570 [2024-07-22 10:55:21.496841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:00:13.827 [2024-07-22 10:55:21.507284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:13.827 [2024-07-22 10:55:21.547031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:00:14.395 10:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:14.395 10:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:14.395 10:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fbTvA9VFZS 01:00:14.654 [2024-07-22 10:55:22.383497] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:00:14.654 [2024-07-22 10:55:22.383590] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:00:14.654 TLSTESTn1 01:00:14.654 10:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:00:14.654 Running I/O for 10 seconds... 
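For reference, the target-plus-initiator sequence this successful TLSTESTn1 run exercises is the one traced from target/tls.sh@165 through @41 above, condensed here with rpc.py and bdevperf.py standing in for their full /home/vagrant/spdk_repo/spdk paths:

    # target side (default /var/tmp/spdk.sock)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fbTvA9VFZS

    # initiator side (bdevperf's RPC socket)
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.fbTvA9VFZS
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests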
01:00:26.858 01:00:26.858 Latency(us) 01:00:26.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:26.858 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:00:26.858 Verification LBA range: start 0x0 length 0x2000 01:00:26.858 TLSTESTn1 : 10.01 5841.64 22.82 0.00 0.00 21878.00 4632.26 18213.22 01:00:26.858 =================================================================================================================== 01:00:26.858 Total : 5841.64 22.82 0.00 0.00 21878.00 4632.26 18213.22 01:00:26.858 0 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 100344 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100344 ']' 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100344 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100344 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:00:26.858 killing process with pid 100344 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100344' 01:00:26.858 Received shutdown signal, test time was about 10.000000 seconds 01:00:26.858 01:00:26.858 Latency(us) 01:00:26.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:26.858 =================================================================================================================== 01:00:26.858 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100344 01:00:26.858 [2024-07-22 10:55:32.626495] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100344 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.fbTvA9VFZS 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbTvA9VFZS 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbTvA9VFZS 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbTvA9VFZS 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fbTvA9VFZS' 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100491 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100491 /var/tmp/bdevperf.sock 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100491 ']' 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:26.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:26.858 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:26.858 [2024-07-22 10:55:32.861979] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:26.858 [2024-07-22 10:55:32.862046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100491 ] 01:00:26.858 [2024-07-22 10:55:32.980926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
01:00:26.858 [2024-07-22 10:55:33.005435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:26.858 [2024-07-22 10:55:33.045874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:00:26.858 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:26.858 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:26.858 10:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fbTvA9VFZS 01:00:26.858 [2024-07-22 10:55:33.874255] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:00:26.858 [2024-07-22 10:55:33.874316] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 01:00:26.858 [2024-07-22 10:55:33.874325] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.fbTvA9VFZS 01:00:26.858 2024/07/22 10:55:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.fbTvA9VFZS subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 01:00:26.858 request: 01:00:26.858 { 01:00:26.858 "method": "bdev_nvme_attach_controller", 01:00:26.858 "params": { 01:00:26.858 "name": "TLSTEST", 01:00:26.858 "trtype": "tcp", 01:00:26.858 "traddr": "10.0.0.2", 01:00:26.858 "adrfam": "ipv4", 01:00:26.858 "trsvcid": "4420", 01:00:26.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:26.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:26.859 "prchk_reftag": false, 01:00:26.859 "prchk_guard": false, 01:00:26.859 "hdgst": false, 01:00:26.859 "ddgst": false, 01:00:26.859 "psk": "/tmp/tmp.fbTvA9VFZS" 01:00:26.859 } 01:00:26.859 } 01:00:26.859 Got JSON-RPC error response 01:00:26.859 GoRPCClient: error on JSON-RPC call 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100491 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100491 ']' 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100491 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100491 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:00:26.859 killing process with pid 100491 01:00:26.859 Received shutdown signal, test time was about 10.000000 seconds 01:00:26.859 01:00:26.859 Latency(us) 01:00:26.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:26.859 =================================================================================================================== 01:00:26.859 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
100491' 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100491 01:00:26.859 10:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100491 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 100247 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100247 ']' 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100247 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100247 01:00:26.859 killing process with pid 100247 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100247' 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100247 01:00:26.859 [2024-07-22 10:55:34.134694] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100247 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100543 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:00:26.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100543 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100543 ']' 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:26.859 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:26.859 [2024-07-22 10:55:34.377127] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
01:00:26.859 [2024-07-22 10:55:34.377195] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:26.859 [2024-07-22 10:55:34.495501] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:00:26.859 [2024-07-22 10:55:34.517713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:26.859 [2024-07-22 10:55:34.556919] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:26.859 [2024-07-22 10:55:34.556972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:26.859 [2024-07-22 10:55:34.556982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:26.859 [2024-07-22 10:55:34.556989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:26.859 [2024-07-22 10:55:34.556996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:00:26.859 [2024-07-22 10:55:34.557026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.fbTvA9VFZS 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.fbTvA9VFZS 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.fbTvA9VFZS 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.fbTvA9VFZS 01:00:27.427 10:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:00:27.685 [2024-07-22 10:55:35.452692] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:27.685 10:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:00:27.944 10:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:00:27.944 
[2024-07-22 10:55:35.840147] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:00:27.944 [2024-07-22 10:55:35.840353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:27.944 10:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:00:28.202 malloc0 01:00:28.202 10:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:00:28.461 10:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fbTvA9VFZS 01:00:28.720 [2024-07-22 10:55:36.428017] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 01:00:28.720 [2024-07-22 10:55:36.428051] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 01:00:28.720 [2024-07-22 10:55:36.428079] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 01:00:28.720 2024/07/22 10:55:36 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.fbTvA9VFZS], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 01:00:28.720 request: 01:00:28.720 { 01:00:28.720 "method": "nvmf_subsystem_add_host", 01:00:28.720 "params": { 01:00:28.720 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:28.720 "host": "nqn.2016-06.io.spdk:host1", 01:00:28.720 "psk": "/tmp/tmp.fbTvA9VFZS" 01:00:28.720 } 01:00:28.720 } 01:00:28.720 Got JSON-RPC error response 01:00:28.720 GoRPCClient: error on JSON-RPC call 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 100543 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100543 ']' 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100543 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100543 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:00:28.720 killing process with pid 100543 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100543' 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100543 01:00:28.720 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100543 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.fbTvA9VFZS 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100651 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100651 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100651 ']' 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:28.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:28.979 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:28.979 [2024-07-22 10:55:36.731749] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:28.979 [2024-07-22 10:55:36.731828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:28.979 [2024-07-22 10:55:36.849251] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:00:28.979 [2024-07-22 10:55:36.863561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:28.979 [2024-07-22 10:55:36.903535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:28.979 [2024-07-22 10:55:36.903579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:28.979 [2024-07-22 10:55:36.903588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:28.979 [2024-07-22 10:55:36.903595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:28.979 [2024-07-22 10:55:36.903602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
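Both permission failures above come down to the mode bits on the key file rather than its contents: with the key world-readable, the initiator's bdev_nvme_load_psk refuses it (Code=-1, Operation not permitted) and the target's tcp_load_psk rejects nvmf_subsystem_add_host (Code=-32603, Internal error), while restoring owner-only access at target/tls.sh@181 makes both paths work again. A minimal recap of the toggle the script performs; the stat line is only an illustrative way to confirm the mode.

    chmod 0666 /tmp/tmp.fbTvA9VFZS    # rejected: "Incorrect permissions for PSK file"
    chmod 0600 /tmp/tmp.fbTvA9VFZS    # accepted by both the target and bdevperf
    stat -c '%a' /tmp/tmp.fbTvA9VFZS  # expect 600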
01:00:28.979 [2024-07-22 10:55:36.903625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:29.915 10:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:29.915 10:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:29.915 10:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:00:29.915 10:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:29.916 10:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:29.916 10:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:29.916 10:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.fbTvA9VFZS 01:00:29.916 10:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.fbTvA9VFZS 01:00:29.916 10:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:00:29.916 [2024-07-22 10:55:37.807314] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:29.916 10:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:00:30.175 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:00:30.432 [2024-07-22 10:55:38.190715] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:00:30.432 [2024-07-22 10:55:38.190895] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:30.432 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:00:30.689 malloc0 01:00:30.689 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:00:30.689 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fbTvA9VFZS 01:00:30.947 [2024-07-22 10:55:38.746651] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:00:30.947 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=100744 01:00:30.947 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:00:30.947 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:00:30.947 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 100744 /var/tmp/bdevperf.sock 01:00:30.947 10:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100744 ']' 01:00:30.947 10:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:30.947 10:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:30.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
01:00:30.947 10:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:30.947 10:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:30.947 10:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:30.947 [2024-07-22 10:55:38.816098] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:30.947 [2024-07-22 10:55:38.816189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100744 ] 01:00:31.207 [2024-07-22 10:55:38.934377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:00:31.207 [2024-07-22 10:55:38.955242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:31.207 [2024-07-22 10:55:38.996521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:00:31.806 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:31.806 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:31.806 10:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fbTvA9VFZS 01:00:32.064 [2024-07-22 10:55:39.805402] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:00:32.064 [2024-07-22 10:55:39.805492] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:00:32.064 TLSTESTn1 01:00:32.064 10:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:00:32.339 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 01:00:32.339 "subsystems": [ 01:00:32.339 { 01:00:32.339 "subsystem": "keyring", 01:00:32.339 "config": [] 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "subsystem": "iobuf", 01:00:32.339 "config": [ 01:00:32.339 { 01:00:32.339 "method": "iobuf_set_options", 01:00:32.339 "params": { 01:00:32.339 "large_bufsize": 135168, 01:00:32.339 "large_pool_count": 1024, 01:00:32.339 "small_bufsize": 8192, 01:00:32.339 "small_pool_count": 8192 01:00:32.339 } 01:00:32.339 } 01:00:32.339 ] 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "subsystem": "sock", 01:00:32.339 "config": [ 01:00:32.339 { 01:00:32.339 "method": "sock_set_default_impl", 01:00:32.339 "params": { 01:00:32.339 "impl_name": "posix" 01:00:32.339 } 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "method": "sock_impl_set_options", 01:00:32.339 "params": { 01:00:32.339 "enable_ktls": false, 01:00:32.339 "enable_placement_id": 0, 01:00:32.339 "enable_quickack": false, 01:00:32.339 "enable_recv_pipe": true, 01:00:32.339 "enable_zerocopy_send_client": false, 01:00:32.339 "enable_zerocopy_send_server": true, 01:00:32.339 "impl_name": "ssl", 01:00:32.339 "recv_buf_size": 4096, 01:00:32.339 "send_buf_size": 4096, 01:00:32.339 "tls_version": 0, 01:00:32.339 "zerocopy_threshold": 0 01:00:32.339 } 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "method": "sock_impl_set_options", 
01:00:32.339 "params": { 01:00:32.339 "enable_ktls": false, 01:00:32.339 "enable_placement_id": 0, 01:00:32.339 "enable_quickack": false, 01:00:32.339 "enable_recv_pipe": true, 01:00:32.339 "enable_zerocopy_send_client": false, 01:00:32.339 "enable_zerocopy_send_server": true, 01:00:32.339 "impl_name": "posix", 01:00:32.339 "recv_buf_size": 2097152, 01:00:32.339 "send_buf_size": 2097152, 01:00:32.339 "tls_version": 0, 01:00:32.339 "zerocopy_threshold": 0 01:00:32.339 } 01:00:32.339 } 01:00:32.339 ] 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "subsystem": "vmd", 01:00:32.339 "config": [] 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "subsystem": "accel", 01:00:32.339 "config": [ 01:00:32.339 { 01:00:32.339 "method": "accel_set_options", 01:00:32.339 "params": { 01:00:32.339 "buf_count": 2048, 01:00:32.339 "large_cache_size": 16, 01:00:32.339 "sequence_count": 2048, 01:00:32.339 "small_cache_size": 128, 01:00:32.339 "task_count": 2048 01:00:32.339 } 01:00:32.339 } 01:00:32.339 ] 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "subsystem": "bdev", 01:00:32.339 "config": [ 01:00:32.339 { 01:00:32.339 "method": "bdev_set_options", 01:00:32.339 "params": { 01:00:32.339 "bdev_auto_examine": true, 01:00:32.339 "bdev_io_cache_size": 256, 01:00:32.339 "bdev_io_pool_size": 65535, 01:00:32.339 "iobuf_large_cache_size": 16, 01:00:32.339 "iobuf_small_cache_size": 128 01:00:32.339 } 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "method": "bdev_raid_set_options", 01:00:32.339 "params": { 01:00:32.339 "process_max_bandwidth_mb_sec": 0, 01:00:32.339 "process_window_size_kb": 1024 01:00:32.339 } 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "method": "bdev_iscsi_set_options", 01:00:32.339 "params": { 01:00:32.339 "timeout_sec": 30 01:00:32.339 } 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "method": "bdev_nvme_set_options", 01:00:32.339 "params": { 01:00:32.339 "action_on_timeout": "none", 01:00:32.339 "allow_accel_sequence": false, 01:00:32.339 "arbitration_burst": 0, 01:00:32.339 "bdev_retry_count": 3, 01:00:32.339 "ctrlr_loss_timeout_sec": 0, 01:00:32.339 "delay_cmd_submit": true, 01:00:32.339 "dhchap_dhgroups": [ 01:00:32.339 "null", 01:00:32.339 "ffdhe2048", 01:00:32.339 "ffdhe3072", 01:00:32.339 "ffdhe4096", 01:00:32.339 "ffdhe6144", 01:00:32.339 "ffdhe8192" 01:00:32.339 ], 01:00:32.339 "dhchap_digests": [ 01:00:32.339 "sha256", 01:00:32.339 "sha384", 01:00:32.339 "sha512" 01:00:32.339 ], 01:00:32.339 "disable_auto_failback": false, 01:00:32.339 "fast_io_fail_timeout_sec": 0, 01:00:32.339 "generate_uuids": false, 01:00:32.339 "high_priority_weight": 0, 01:00:32.339 "io_path_stat": false, 01:00:32.339 "io_queue_requests": 0, 01:00:32.339 "keep_alive_timeout_ms": 10000, 01:00:32.339 "low_priority_weight": 0, 01:00:32.339 "medium_priority_weight": 0, 01:00:32.339 "nvme_adminq_poll_period_us": 10000, 01:00:32.339 "nvme_error_stat": false, 01:00:32.339 "nvme_ioq_poll_period_us": 0, 01:00:32.339 "rdma_cm_event_timeout_ms": 0, 01:00:32.339 "rdma_max_cq_size": 0, 01:00:32.339 "rdma_srq_size": 0, 01:00:32.339 "reconnect_delay_sec": 0, 01:00:32.339 "timeout_admin_us": 0, 01:00:32.339 "timeout_us": 0, 01:00:32.339 "transport_ack_timeout": 0, 01:00:32.339 "transport_retry_count": 4, 01:00:32.339 "transport_tos": 0 01:00:32.339 } 01:00:32.339 }, 01:00:32.339 { 01:00:32.339 "method": "bdev_nvme_set_hotplug", 01:00:32.339 "params": { 01:00:32.339 "enable": false, 01:00:32.339 "period_us": 100000 01:00:32.339 } 01:00:32.339 }, 01:00:32.340 { 01:00:32.340 "method": "bdev_malloc_create", 01:00:32.340 "params": { 01:00:32.340 
"block_size": 4096, 01:00:32.340 "name": "malloc0", 01:00:32.340 "num_blocks": 8192, 01:00:32.340 "optimal_io_boundary": 0, 01:00:32.340 "physical_block_size": 4096, 01:00:32.340 "uuid": "2ac79b33-62cb-4bd3-8291-162b6ad76060" 01:00:32.340 } 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "method": "bdev_wait_for_examine" 01:00:32.340 } 01:00:32.340 ] 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "subsystem": "nbd", 01:00:32.340 "config": [] 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "subsystem": "scheduler", 01:00:32.340 "config": [ 01:00:32.340 { 01:00:32.340 "method": "framework_set_scheduler", 01:00:32.340 "params": { 01:00:32.340 "name": "static" 01:00:32.340 } 01:00:32.340 } 01:00:32.340 ] 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "subsystem": "nvmf", 01:00:32.340 "config": [ 01:00:32.340 { 01:00:32.340 "method": "nvmf_set_config", 01:00:32.340 "params": { 01:00:32.340 "admin_cmd_passthru": { 01:00:32.340 "identify_ctrlr": false 01:00:32.340 }, 01:00:32.340 "discovery_filter": "match_any" 01:00:32.340 } 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "method": "nvmf_set_max_subsystems", 01:00:32.340 "params": { 01:00:32.340 "max_subsystems": 1024 01:00:32.340 } 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "method": "nvmf_set_crdt", 01:00:32.340 "params": { 01:00:32.340 "crdt1": 0, 01:00:32.340 "crdt2": 0, 01:00:32.340 "crdt3": 0 01:00:32.340 } 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "method": "nvmf_create_transport", 01:00:32.340 "params": { 01:00:32.340 "abort_timeout_sec": 1, 01:00:32.340 "ack_timeout": 0, 01:00:32.340 "buf_cache_size": 4294967295, 01:00:32.340 "c2h_success": false, 01:00:32.340 "data_wr_pool_size": 0, 01:00:32.340 "dif_insert_or_strip": false, 01:00:32.340 "in_capsule_data_size": 4096, 01:00:32.340 "io_unit_size": 131072, 01:00:32.340 "max_aq_depth": 128, 01:00:32.340 "max_io_qpairs_per_ctrlr": 127, 01:00:32.340 "max_io_size": 131072, 01:00:32.340 "max_queue_depth": 128, 01:00:32.340 "num_shared_buffers": 511, 01:00:32.340 "sock_priority": 0, 01:00:32.340 "trtype": "TCP", 01:00:32.340 "zcopy": false 01:00:32.340 } 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "method": "nvmf_create_subsystem", 01:00:32.340 "params": { 01:00:32.340 "allow_any_host": false, 01:00:32.340 "ana_reporting": false, 01:00:32.340 "max_cntlid": 65519, 01:00:32.340 "max_namespaces": 10, 01:00:32.340 "min_cntlid": 1, 01:00:32.340 "model_number": "SPDK bdev Controller", 01:00:32.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:32.340 "serial_number": "SPDK00000000000001" 01:00:32.340 } 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "method": "nvmf_subsystem_add_host", 01:00:32.340 "params": { 01:00:32.340 "host": "nqn.2016-06.io.spdk:host1", 01:00:32.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:32.340 "psk": "/tmp/tmp.fbTvA9VFZS" 01:00:32.340 } 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "method": "nvmf_subsystem_add_ns", 01:00:32.340 "params": { 01:00:32.340 "namespace": { 01:00:32.340 "bdev_name": "malloc0", 01:00:32.340 "nguid": "2AC79B3362CB4BD38291162B6AD76060", 01:00:32.340 "no_auto_visible": false, 01:00:32.340 "nsid": 1, 01:00:32.340 "uuid": "2ac79b33-62cb-4bd3-8291-162b6ad76060" 01:00:32.340 }, 01:00:32.340 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:00:32.340 } 01:00:32.340 }, 01:00:32.340 { 01:00:32.340 "method": "nvmf_subsystem_add_listener", 01:00:32.340 "params": { 01:00:32.340 "listen_address": { 01:00:32.340 "adrfam": "IPv4", 01:00:32.340 "traddr": "10.0.0.2", 01:00:32.340 "trsvcid": "4420", 01:00:32.340 "trtype": "TCP" 01:00:32.340 }, 01:00:32.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 
01:00:32.340 "secure_channel": true 01:00:32.340 } 01:00:32.340 } 01:00:32.340 ] 01:00:32.340 } 01:00:32.340 ] 01:00:32.340 }' 01:00:32.340 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:00:32.599 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 01:00:32.599 "subsystems": [ 01:00:32.599 { 01:00:32.599 "subsystem": "keyring", 01:00:32.599 "config": [] 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "subsystem": "iobuf", 01:00:32.599 "config": [ 01:00:32.599 { 01:00:32.599 "method": "iobuf_set_options", 01:00:32.599 "params": { 01:00:32.599 "large_bufsize": 135168, 01:00:32.599 "large_pool_count": 1024, 01:00:32.599 "small_bufsize": 8192, 01:00:32.599 "small_pool_count": 8192 01:00:32.599 } 01:00:32.599 } 01:00:32.599 ] 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "subsystem": "sock", 01:00:32.599 "config": [ 01:00:32.599 { 01:00:32.599 "method": "sock_set_default_impl", 01:00:32.599 "params": { 01:00:32.599 "impl_name": "posix" 01:00:32.599 } 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "method": "sock_impl_set_options", 01:00:32.599 "params": { 01:00:32.599 "enable_ktls": false, 01:00:32.599 "enable_placement_id": 0, 01:00:32.599 "enable_quickack": false, 01:00:32.599 "enable_recv_pipe": true, 01:00:32.599 "enable_zerocopy_send_client": false, 01:00:32.599 "enable_zerocopy_send_server": true, 01:00:32.599 "impl_name": "ssl", 01:00:32.599 "recv_buf_size": 4096, 01:00:32.599 "send_buf_size": 4096, 01:00:32.599 "tls_version": 0, 01:00:32.599 "zerocopy_threshold": 0 01:00:32.599 } 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "method": "sock_impl_set_options", 01:00:32.599 "params": { 01:00:32.599 "enable_ktls": false, 01:00:32.599 "enable_placement_id": 0, 01:00:32.599 "enable_quickack": false, 01:00:32.599 "enable_recv_pipe": true, 01:00:32.599 "enable_zerocopy_send_client": false, 01:00:32.599 "enable_zerocopy_send_server": true, 01:00:32.599 "impl_name": "posix", 01:00:32.599 "recv_buf_size": 2097152, 01:00:32.599 "send_buf_size": 2097152, 01:00:32.599 "tls_version": 0, 01:00:32.599 "zerocopy_threshold": 0 01:00:32.599 } 01:00:32.599 } 01:00:32.599 ] 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "subsystem": "vmd", 01:00:32.599 "config": [] 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "subsystem": "accel", 01:00:32.599 "config": [ 01:00:32.599 { 01:00:32.599 "method": "accel_set_options", 01:00:32.599 "params": { 01:00:32.599 "buf_count": 2048, 01:00:32.599 "large_cache_size": 16, 01:00:32.599 "sequence_count": 2048, 01:00:32.599 "small_cache_size": 128, 01:00:32.599 "task_count": 2048 01:00:32.599 } 01:00:32.599 } 01:00:32.599 ] 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "subsystem": "bdev", 01:00:32.599 "config": [ 01:00:32.599 { 01:00:32.599 "method": "bdev_set_options", 01:00:32.599 "params": { 01:00:32.599 "bdev_auto_examine": true, 01:00:32.599 "bdev_io_cache_size": 256, 01:00:32.599 "bdev_io_pool_size": 65535, 01:00:32.599 "iobuf_large_cache_size": 16, 01:00:32.599 "iobuf_small_cache_size": 128 01:00:32.599 } 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "method": "bdev_raid_set_options", 01:00:32.599 "params": { 01:00:32.599 "process_max_bandwidth_mb_sec": 0, 01:00:32.599 "process_window_size_kb": 1024 01:00:32.599 } 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "method": "bdev_iscsi_set_options", 01:00:32.599 "params": { 01:00:32.599 "timeout_sec": 30 01:00:32.599 } 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "method": "bdev_nvme_set_options", 01:00:32.599 "params": { 01:00:32.599 
"action_on_timeout": "none", 01:00:32.599 "allow_accel_sequence": false, 01:00:32.599 "arbitration_burst": 0, 01:00:32.599 "bdev_retry_count": 3, 01:00:32.599 "ctrlr_loss_timeout_sec": 0, 01:00:32.599 "delay_cmd_submit": true, 01:00:32.599 "dhchap_dhgroups": [ 01:00:32.599 "null", 01:00:32.599 "ffdhe2048", 01:00:32.599 "ffdhe3072", 01:00:32.599 "ffdhe4096", 01:00:32.599 "ffdhe6144", 01:00:32.599 "ffdhe8192" 01:00:32.599 ], 01:00:32.599 "dhchap_digests": [ 01:00:32.599 "sha256", 01:00:32.599 "sha384", 01:00:32.599 "sha512" 01:00:32.599 ], 01:00:32.599 "disable_auto_failback": false, 01:00:32.599 "fast_io_fail_timeout_sec": 0, 01:00:32.599 "generate_uuids": false, 01:00:32.599 "high_priority_weight": 0, 01:00:32.599 "io_path_stat": false, 01:00:32.599 "io_queue_requests": 512, 01:00:32.599 "keep_alive_timeout_ms": 10000, 01:00:32.599 "low_priority_weight": 0, 01:00:32.599 "medium_priority_weight": 0, 01:00:32.599 "nvme_adminq_poll_period_us": 10000, 01:00:32.599 "nvme_error_stat": false, 01:00:32.599 "nvme_ioq_poll_period_us": 0, 01:00:32.599 "rdma_cm_event_timeout_ms": 0, 01:00:32.599 "rdma_max_cq_size": 0, 01:00:32.599 "rdma_srq_size": 0, 01:00:32.599 "reconnect_delay_sec": 0, 01:00:32.599 "timeout_admin_us": 0, 01:00:32.599 "timeout_us": 0, 01:00:32.599 "transport_ack_timeout": 0, 01:00:32.599 "transport_retry_count": 4, 01:00:32.599 "transport_tos": 0 01:00:32.599 } 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "method": "bdev_nvme_attach_controller", 01:00:32.599 "params": { 01:00:32.599 "adrfam": "IPv4", 01:00:32.599 "ctrlr_loss_timeout_sec": 0, 01:00:32.599 "ddgst": false, 01:00:32.599 "fast_io_fail_timeout_sec": 0, 01:00:32.599 "hdgst": false, 01:00:32.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:32.599 "name": "TLSTEST", 01:00:32.599 "prchk_guard": false, 01:00:32.599 "prchk_reftag": false, 01:00:32.599 "psk": "/tmp/tmp.fbTvA9VFZS", 01:00:32.599 "reconnect_delay_sec": 0, 01:00:32.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:32.599 "traddr": "10.0.0.2", 01:00:32.599 "trsvcid": "4420", 01:00:32.599 "trtype": "TCP" 01:00:32.599 } 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "method": "bdev_nvme_set_hotplug", 01:00:32.599 "params": { 01:00:32.599 "enable": false, 01:00:32.599 "period_us": 100000 01:00:32.599 } 01:00:32.599 }, 01:00:32.599 { 01:00:32.599 "method": "bdev_wait_for_examine" 01:00:32.599 } 01:00:32.600 ] 01:00:32.600 }, 01:00:32.600 { 01:00:32.600 "subsystem": "nbd", 01:00:32.600 "config": [] 01:00:32.600 } 01:00:32.600 ] 01:00:32.600 }' 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 100744 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100744 ']' 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100744 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100744 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:00:32.600 killing process with pid 100744 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100744' 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100744 01:00:32.600 Received 
shutdown signal, test time was about 10.000000 seconds 01:00:32.600 01:00:32.600 Latency(us) 01:00:32.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:32.600 =================================================================================================================== 01:00:32.600 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:00:32.600 [2024-07-22 10:55:40.484616] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:00:32.600 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100744 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 100651 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100651 ']' 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100651 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100651 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:00:32.858 killing process with pid 100651 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100651' 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100651 01:00:32.858 [2024-07-22 10:55:40.697621] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:00:32.858 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100651 01:00:33.117 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 01:00:33.117 10:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:00:33.117 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:33.117 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 01:00:33.117 "subsystems": [ 01:00:33.117 { 01:00:33.117 "subsystem": "keyring", 01:00:33.117 "config": [] 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "subsystem": "iobuf", 01:00:33.117 "config": [ 01:00:33.117 { 01:00:33.117 "method": "iobuf_set_options", 01:00:33.117 "params": { 01:00:33.117 "large_bufsize": 135168, 01:00:33.117 "large_pool_count": 1024, 01:00:33.117 "small_bufsize": 8192, 01:00:33.117 "small_pool_count": 8192 01:00:33.117 } 01:00:33.117 } 01:00:33.117 ] 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "subsystem": "sock", 01:00:33.117 "config": [ 01:00:33.117 { 01:00:33.117 "method": "sock_set_default_impl", 01:00:33.117 "params": { 01:00:33.117 "impl_name": "posix" 01:00:33.117 } 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "method": "sock_impl_set_options", 01:00:33.117 "params": { 01:00:33.117 "enable_ktls": false, 01:00:33.117 "enable_placement_id": 0, 01:00:33.117 "enable_quickack": false, 01:00:33.117 "enable_recv_pipe": true, 01:00:33.117 "enable_zerocopy_send_client": false, 01:00:33.117 "enable_zerocopy_send_server": true, 01:00:33.117 "impl_name": "ssl", 01:00:33.117 "recv_buf_size": 4096, 01:00:33.117 "send_buf_size": 4096, 01:00:33.117 "tls_version": 0, 01:00:33.117 "zerocopy_threshold": 0 01:00:33.117 } 
01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "method": "sock_impl_set_options", 01:00:33.117 "params": { 01:00:33.117 "enable_ktls": false, 01:00:33.117 "enable_placement_id": 0, 01:00:33.117 "enable_quickack": false, 01:00:33.117 "enable_recv_pipe": true, 01:00:33.117 "enable_zerocopy_send_client": false, 01:00:33.117 "enable_zerocopy_send_server": true, 01:00:33.117 "impl_name": "posix", 01:00:33.117 "recv_buf_size": 2097152, 01:00:33.117 "send_buf_size": 2097152, 01:00:33.117 "tls_version": 0, 01:00:33.117 "zerocopy_threshold": 0 01:00:33.117 } 01:00:33.117 } 01:00:33.117 ] 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "subsystem": "vmd", 01:00:33.117 "config": [] 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "subsystem": "accel", 01:00:33.117 "config": [ 01:00:33.117 { 01:00:33.117 "method": "accel_set_options", 01:00:33.117 "params": { 01:00:33.117 "buf_count": 2048, 01:00:33.117 "large_cache_size": 16, 01:00:33.117 "sequence_count": 2048, 01:00:33.117 "small_cache_size": 128, 01:00:33.117 "task_count": 2048 01:00:33.117 } 01:00:33.117 } 01:00:33.117 ] 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "subsystem": "bdev", 01:00:33.117 "config": [ 01:00:33.117 { 01:00:33.117 "method": "bdev_set_options", 01:00:33.117 "params": { 01:00:33.117 "bdev_auto_examine": true, 01:00:33.117 "bdev_io_cache_size": 256, 01:00:33.117 "bdev_io_pool_size": 65535, 01:00:33.117 "iobuf_large_cache_size": 16, 01:00:33.117 "iobuf_small_cache_size": 128 01:00:33.117 } 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "method": "bdev_raid_set_options", 01:00:33.117 "params": { 01:00:33.117 "process_max_bandwidth_mb_sec": 0, 01:00:33.117 "process_window_size_kb": 1024 01:00:33.117 } 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "method": "bdev_iscsi_set_options", 01:00:33.117 "params": { 01:00:33.117 "timeout_sec": 30 01:00:33.117 } 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "method": "bdev_nvme_set_options", 01:00:33.117 "params": { 01:00:33.117 "action_on_timeout": "none", 01:00:33.117 "allow_accel_sequence": false, 01:00:33.117 "arbitration_burst": 0, 01:00:33.117 "bdev_retry_count": 3, 01:00:33.117 "ctrlr_loss_timeout_sec": 0, 01:00:33.117 "delay_cmd_submit": true, 01:00:33.117 "dhchap_dhgroups": [ 01:00:33.117 "null", 01:00:33.117 "ffdhe2048", 01:00:33.117 "ffdhe3072", 01:00:33.117 "ffdhe4096", 01:00:33.117 "ffdhe6144", 01:00:33.117 "ffdhe8192" 01:00:33.117 ], 01:00:33.117 "dhchap_digests": [ 01:00:33.117 "sha256", 01:00:33.117 "sha384", 01:00:33.117 "sha512" 01:00:33.117 ], 01:00:33.117 "disable_auto_failback": false, 01:00:33.117 "fast_io_fail_timeout_sec": 0, 01:00:33.117 "generate_uuids": false, 01:00:33.117 "high_priority_weight": 0, 01:00:33.117 "io_path_stat": false, 01:00:33.117 "io_queue_requests": 0, 01:00:33.117 "keep_alive_timeout_ms": 10000, 01:00:33.117 "low_priority_weight": 0, 01:00:33.117 "medium_priority_weight": 0, 01:00:33.117 "nvme_adminq_poll_period_us": 10000, 01:00:33.117 "nvme_error_stat": false, 01:00:33.117 "nvme_ioq_poll_period_us": 0, 01:00:33.117 "rdma_cm_event_timeout_ms": 0, 01:00:33.117 "rdma_max_cq_size": 0, 01:00:33.117 "rdma_srq_size": 0, 01:00:33.117 "reconnect_delay_sec": 0, 01:00:33.117 "timeout_admin_us": 0, 01:00:33.117 "timeout_us": 0, 01:00:33.117 "tran 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:33.117 sport_ack_timeout": 0, 01:00:33.117 "transport_retry_count": 4, 01:00:33.117 "transport_tos": 0 01:00:33.117 } 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "method": "bdev_nvme_set_hotplug", 01:00:33.117 "params": { 01:00:33.117 "enable": false, 
01:00:33.117 "period_us": 100000 01:00:33.117 } 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "method": "bdev_malloc_create", 01:00:33.117 "params": { 01:00:33.117 "block_size": 4096, 01:00:33.117 "name": "malloc0", 01:00:33.117 "num_blocks": 8192, 01:00:33.117 "optimal_io_boundary": 0, 01:00:33.117 "physical_block_size": 4096, 01:00:33.117 "uuid": "2ac79b33-62cb-4bd3-8291-162b6ad76060" 01:00:33.117 } 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "method": "bdev_wait_for_examine" 01:00:33.117 } 01:00:33.117 ] 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "subsystem": "nbd", 01:00:33.117 "config": [] 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "subsystem": "scheduler", 01:00:33.117 "config": [ 01:00:33.117 { 01:00:33.117 "method": "framework_set_scheduler", 01:00:33.117 "params": { 01:00:33.117 "name": "static" 01:00:33.117 } 01:00:33.117 } 01:00:33.117 ] 01:00:33.117 }, 01:00:33.117 { 01:00:33.117 "subsystem": "nvmf", 01:00:33.117 "config": [ 01:00:33.117 { 01:00:33.117 "method": "nvmf_set_config", 01:00:33.117 "params": { 01:00:33.117 "admin_cmd_passthru": { 01:00:33.118 "identify_ctrlr": false 01:00:33.118 }, 01:00:33.118 "discovery_filter": "match_any" 01:00:33.118 } 01:00:33.118 }, 01:00:33.118 { 01:00:33.118 "method": "nvmf_set_max_subsystems", 01:00:33.118 "params": { 01:00:33.118 "max_subsystems": 1024 01:00:33.118 } 01:00:33.118 }, 01:00:33.118 { 01:00:33.118 "method": "nvmf_set_crdt", 01:00:33.118 "params": { 01:00:33.118 "crdt1": 0, 01:00:33.118 "crdt2": 0, 01:00:33.118 "crdt3": 0 01:00:33.118 } 01:00:33.118 }, 01:00:33.118 { 01:00:33.118 "method": "nvmf_create_transport", 01:00:33.118 "params": { 01:00:33.118 "abort_timeout_sec": 1, 01:00:33.118 "ack_timeout": 0, 01:00:33.118 "buf_cache_size": 4294967295, 01:00:33.118 "c2h_success": false, 01:00:33.118 "data_wr_pool_size": 0, 01:00:33.118 "dif_insert_or_strip": false, 01:00:33.118 "in_capsule_data_size": 4096, 01:00:33.118 "io_unit_size": 131072, 01:00:33.118 "max_aq_depth": 128, 01:00:33.118 "max_io_qpairs_per_ctrlr": 127, 01:00:33.118 "max_io_size": 131072, 01:00:33.118 "max_queue_depth": 128, 01:00:33.118 "num_shared_buffers": 511, 01:00:33.118 "sock_priority": 0, 01:00:33.118 "trtype": "TCP", 01:00:33.118 "zcopy": false 01:00:33.118 } 01:00:33.118 }, 01:00:33.118 { 01:00:33.118 "method": "nvmf_create_subsystem", 01:00:33.118 "params": { 01:00:33.118 "allow_any_host": false, 01:00:33.118 "ana_reporting": false, 01:00:33.118 "max_cntlid": 65519, 01:00:33.118 "max_namespaces": 10, 01:00:33.118 "min_cntlid": 1, 01:00:33.118 "model_number": "SPDK bdev Controller", 01:00:33.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:33.118 "serial_number": "SPDK00000000000001" 01:00:33.118 } 01:00:33.118 }, 01:00:33.118 { 01:00:33.118 "method": "nvmf_subsystem_add_host", 01:00:33.118 "params": { 01:00:33.118 "host": "nqn.2016-06.io.spdk:host1", 01:00:33.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:33.118 "psk": "/tmp/tmp.fbTvA9VFZS" 01:00:33.118 } 01:00:33.118 }, 01:00:33.118 { 01:00:33.118 "method": "nvmf_subsystem_add_ns", 01:00:33.118 "params": { 01:00:33.118 "namespace": { 01:00:33.118 "bdev_name": "malloc0", 01:00:33.118 "nguid": "2AC79B3362CB4BD38291162B6AD76060", 01:00:33.118 "no_auto_visible": false, 01:00:33.118 "nsid": 1, 01:00:33.118 "uuid": "2ac79b33-62cb-4bd3-8291-162b6ad76060" 01:00:33.118 }, 01:00:33.118 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:00:33.118 } 01:00:33.118 }, 01:00:33.118 { 01:00:33.118 "method": "nvmf_subsystem_add_listener", 01:00:33.118 "params": { 01:00:33.118 "listen_address": { 01:00:33.118 "adrfam": "IPv4", 
01:00:33.118 "traddr": "10.0.0.2", 01:00:33.118 "trsvcid": "4420", 01:00:33.118 "trtype": "TCP" 01:00:33.118 }, 01:00:33.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:33.118 "secure_channel": true 01:00:33.118 } 01:00:33.118 } 01:00:33.118 ] 01:00:33.118 } 01:00:33.118 ] 01:00:33.118 }' 01:00:33.118 10:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100817 01:00:33.118 10:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100817 01:00:33.118 10:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 01:00:33.118 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100817 ']' 01:00:33.118 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:33.118 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:33.118 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:33.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:33.118 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:33.118 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:33.118 [2024-07-22 10:55:40.932401] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:33.118 [2024-07-22 10:55:40.932463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:33.376 [2024-07-22 10:55:41.050440] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:00:33.376 [2024-07-22 10:55:41.072817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:33.376 [2024-07-22 10:55:41.112406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:33.376 [2024-07-22 10:55:41.112460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:33.376 [2024-07-22 10:55:41.112469] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:33.376 [2024-07-22 10:55:41.112477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:33.376 [2024-07-22 10:55:41.112500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:00:33.376 [2024-07-22 10:55:41.112572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:33.635 [2024-07-22 10:55:41.311602] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:33.635 [2024-07-22 10:55:41.327531] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:00:33.635 [2024-07-22 10:55:41.343501] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:00:33.635 [2024-07-22 10:55:41.343676] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:33.894 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:33.894 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:33.894 10:55:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:00:33.894 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:33.894 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:34.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=100861 01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 100861 /var/tmp/bdevperf.sock 01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100861 ']' 01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
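Note the warnings around this listener: the PSK is still passed as a raw file path, and both the target side (nvmf_tcp_psk_path) and the initiator side (spdk_nvme_ctrlr_opts.psk) flag that form as deprecated, scheduled for removal in v24.09. The later runs in this same log switch the initiator to the keyring flow; taken from those runs, it is roughly:

  # register the PSK file as a named key, then reference it by name when attaching
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbTvA9VFZS
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1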
01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:34.153 10:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 01:00:34.153 "subsystems": [ 01:00:34.153 { 01:00:34.153 "subsystem": "keyring", 01:00:34.153 "config": [] 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "subsystem": "iobuf", 01:00:34.153 "config": [ 01:00:34.153 { 01:00:34.153 "method": "iobuf_set_options", 01:00:34.153 "params": { 01:00:34.153 "large_bufsize": 135168, 01:00:34.153 "large_pool_count": 1024, 01:00:34.153 "small_bufsize": 8192, 01:00:34.153 "small_pool_count": 8192 01:00:34.153 } 01:00:34.153 } 01:00:34.153 ] 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "subsystem": "sock", 01:00:34.153 "config": [ 01:00:34.153 { 01:00:34.153 "method": "sock_set_default_impl", 01:00:34.153 "params": { 01:00:34.153 "impl_name": "posix" 01:00:34.153 } 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "method": "sock_impl_set_options", 01:00:34.153 "params": { 01:00:34.153 "enable_ktls": false, 01:00:34.153 "enable_placement_id": 0, 01:00:34.153 "enable_quickack": false, 01:00:34.153 "enable_recv_pipe": true, 01:00:34.153 "enable_zerocopy_send_client": false, 01:00:34.153 "enable_zerocopy_send_server": true, 01:00:34.153 "impl_name": "ssl", 01:00:34.153 "recv_buf_size": 4096, 01:00:34.153 "send_buf_size": 4096, 01:00:34.153 "tls_version": 0, 01:00:34.153 "zerocopy_threshold": 0 01:00:34.153 } 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "method": "sock_impl_set_options", 01:00:34.153 "params": { 01:00:34.153 "enable_ktls": false, 01:00:34.153 "enable_placement_id": 0, 01:00:34.153 "enable_quickack": false, 01:00:34.153 "enable_recv_pipe": true, 01:00:34.153 "enable_zerocopy_send_client": false, 01:00:34.153 "enable_zerocopy_send_server": true, 01:00:34.153 "impl_name": "posix", 01:00:34.153 "recv_buf_size": 2097152, 01:00:34.153 "send_buf_size": 2097152, 01:00:34.153 "tls_version": 0, 01:00:34.153 "zerocopy_threshold": 0 01:00:34.153 } 01:00:34.153 } 01:00:34.153 ] 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "subsystem": "vmd", 01:00:34.153 "config": [] 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "subsystem": "accel", 01:00:34.153 "config": [ 01:00:34.153 { 01:00:34.153 "method": "accel_set_options", 01:00:34.153 "params": { 01:00:34.153 "buf_count": 2048, 01:00:34.153 "large_cache_size": 16, 01:00:34.153 "sequence_count": 2048, 01:00:34.153 "small_cache_size": 128, 01:00:34.153 "task_count": 2048 01:00:34.153 } 01:00:34.153 } 01:00:34.153 ] 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "subsystem": "bdev", 01:00:34.153 "config": [ 01:00:34.153 { 01:00:34.153 "method": "bdev_set_options", 01:00:34.153 "params": { 01:00:34.153 "bdev_auto_examine": true, 01:00:34.153 "bdev_io_cache_size": 256, 01:00:34.153 "bdev_io_pool_size": 65535, 01:00:34.153 "iobuf_large_cache_size": 16, 01:00:34.153 "iobuf_small_cache_size": 128 01:00:34.153 } 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "method": "bdev_raid_set_options", 01:00:34.153 "params": { 01:00:34.153 "process_max_bandwidth_mb_sec": 0, 01:00:34.153 "process_window_size_kb": 1024 01:00:34.153 } 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "method": "bdev_iscsi_set_options", 01:00:34.153 "params": { 01:00:34.153 "timeout_sec": 30 01:00:34.153 } 01:00:34.153 }, 
01:00:34.153 { 01:00:34.153 "method": "bdev_nvme_set_options", 01:00:34.153 "params": { 01:00:34.153 "action_on_timeout": "none", 01:00:34.153 "allow_accel_sequence": false, 01:00:34.153 "arbitration_burst": 0, 01:00:34.153 "bdev_retry_count": 3, 01:00:34.153 "ctrlr_loss_timeout_sec": 0, 01:00:34.153 "delay_cmd_submit": true, 01:00:34.153 "dhchap_dhgroups": [ 01:00:34.153 "null", 01:00:34.153 "ffdhe2048", 01:00:34.153 "ffdhe3072", 01:00:34.153 "ffdhe4096", 01:00:34.153 "ffdhe6144", 01:00:34.153 "ffdhe8192" 01:00:34.153 ], 01:00:34.153 "dhchap_digests": [ 01:00:34.153 "sha256", 01:00:34.153 "sha384", 01:00:34.153 "sha512" 01:00:34.153 ], 01:00:34.153 "disable_auto_failback": false, 01:00:34.153 "fast_io_fail_timeout_sec": 0, 01:00:34.153 "generate_uuids": false, 01:00:34.153 "high_priority_weight": 0, 01:00:34.153 "io_path_stat": false, 01:00:34.153 "io_queue_requests": 512, 01:00:34.153 "keep_alive_timeout_ms": 10000, 01:00:34.153 "low_priority_weight": 0, 01:00:34.153 "medium_priority_weight": 0, 01:00:34.153 "nvme_adminq_poll_period_us": 10000, 01:00:34.153 "nvme_error_stat": false, 01:00:34.153 "nvme_ioq_poll_period_us": 0, 01:00:34.153 "rdma_cm_event_timeout_ms": 0, 01:00:34.153 "rdma_max_cq_size": 0, 01:00:34.153 "rdma_srq_size": 0, 01:00:34.153 "reconnect_delay_sec": 0, 01:00:34.153 "timeout_admin_us": 0, 01:00:34.153 "timeout_us": 0, 01:00:34.153 "transport_ack_timeout": 0, 01:00:34.153 "transport_retry_count": 4, 01:00:34.153 "transport_tos": 0 01:00:34.153 } 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "method": "bdev_nvme_attach_controller", 01:00:34.153 "params": { 01:00:34.153 "adrfam": "IPv4", 01:00:34.153 "ctrlr_loss_timeout_sec": 0, 01:00:34.153 "ddgst": false, 01:00:34.153 "fast_io_fail_timeout_sec": 0, 01:00:34.153 "hdgst": false, 01:00:34.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:34.153 "name": "TLSTEST", 01:00:34.153 "prchk_guard": false, 01:00:34.153 "prchk_reftag": false, 01:00:34.153 "psk": "/tmp/tmp.fbTvA9VFZS", 01:00:34.153 "reconnect_delay_sec": 0, 01:00:34.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:34.153 "traddr": "10.0.0.2", 01:00:34.153 "trsvcid": "4420", 01:00:34.153 "trtype": "TCP" 01:00:34.153 } 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "method": "bdev_nvme_set_hotplug", 01:00:34.153 "params": { 01:00:34.153 "enable": false, 01:00:34.153 "period_us": 100000 01:00:34.153 } 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "method": "bdev_wait_for_examine" 01:00:34.153 } 01:00:34.153 ] 01:00:34.153 }, 01:00:34.153 { 01:00:34.153 "subsystem": "nbd", 01:00:34.153 "config": [] 01:00:34.153 } 01:00:34.153 ] 01:00:34.153 }' 01:00:34.153 [2024-07-22 10:55:41.878935] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:34.153 [2024-07-22 10:55:41.879134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100861 ] 01:00:34.153 [2024-07-22 10:55:41.996547] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
01:00:34.153 [2024-07-22 10:55:42.014933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:34.153 [2024-07-22 10:55:42.055784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:00:34.412 [2024-07-22 10:55:42.195647] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:00:34.412 [2024-07-22 10:55:42.195986] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:00:34.980 10:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:34.980 10:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:34.980 10:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:00:34.980 Running I/O for 10 seconds... 01:00:44.962 01:00:44.963 Latency(us) 01:00:44.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:44.963 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:00:44.963 Verification LBA range: start 0x0 length 0x2000 01:00:44.963 TLSTESTn1 : 10.01 5981.24 23.36 0.00 0.00 21367.10 4605.94 15791.81 01:00:44.963 =================================================================================================================== 01:00:44.963 Total : 5981.24 23.36 0.00 0.00 21367.10 4605.94 15791.81 01:00:44.963 0 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 100861 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100861 ']' 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100861 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100861 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:00:44.963 killing process with pid 100861 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100861' 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100861 01:00:44.963 Received shutdown signal, test time was about 10.000000 seconds 01:00:44.963 01:00:44.963 Latency(us) 01:00:44.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:44.963 =================================================================================================================== 01:00:44.963 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:00:44.963 [2024-07-22 10:55:52.866951] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:00:44.963 10:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100861 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 100817 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100817 ']' 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 100817 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100817 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:00:45.220 killing process with pid 100817 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100817' 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100817 01:00:45.220 [2024-07-22 10:55:53.078896] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:00:45.220 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100817 01:00:45.476 10:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 01:00:45.476 10:55:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:00:45.476 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:45.476 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:45.476 10:55:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101006 01:00:45.476 10:55:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:00:45.476 10:55:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101006 01:00:45.476 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101006 ']' 01:00:45.476 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:45.476 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:45.477 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:45.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:45.477 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:45.477 10:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:45.477 [2024-07-22 10:55:53.320365] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:45.477 [2024-07-22 10:55:53.320426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:45.733 [2024-07-22 10:55:53.438356] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:00:45.733 [2024-07-22 10:55:53.462071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:45.733 [2024-07-22 10:55:53.500749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:45.733 [2024-07-22 10:55:53.500792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
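The teardown that ran just above (the '[' -z ... ']' guard, kill -0, ps --no-headers -o comm=, the process-name-vs-sudo check, kill, wait) is the killprocess helper from autotest_common.sh, and it reappears for every pid in this log. Condensed from those xtrace lines, it behaves roughly like the following; this is a simplified paraphrase, not the exact helper:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                       # no pid given
      kill -0 "$pid" || return 1                      # is the process still alive?
      process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_1 for an SPDK app
      echo "killing process with pid $pid"
      [ "$process_name" = sudo ] || kill "$pid"       # the sudo case is handled separately in the real helper
      wait "$pid"
  }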
01:00:45.733 [2024-07-22 10:55:53.500801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:45.733 [2024-07-22 10:55:53.500824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:45.733 [2024-07-22 10:55:53.500830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:00:45.733 [2024-07-22 10:55:53.500853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:46.298 10:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:46.298 10:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:46.298 10:55:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:00:46.298 10:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:46.298 10:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:46.298 10:55:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:46.298 10:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.fbTvA9VFZS 01:00:46.298 10:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.fbTvA9VFZS 01:00:46.298 10:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:00:46.557 [2024-07-22 10:55:54.375589] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:46.557 10:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:00:46.816 10:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:00:46.816 [2024-07-22 10:55:54.739053] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:00:46.816 [2024-07-22 10:55:54.739230] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:47.074 10:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:00:47.074 malloc0 01:00:47.074 10:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:00:47.332 10:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fbTvA9VFZS 01:00:47.590 [2024-07-22 10:55:55.298918] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:00:47.590 10:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:00:47.590 10:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=101103 01:00:47.590 10:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:00:47.590 10:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 101103 /var/tmp/bdevperf.sock 01:00:47.590 10:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101103 ']' 01:00:47.590 10:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- 
# local rpc_addr=/var/tmp/bdevperf.sock 01:00:47.590 10:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:47.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:47.590 10:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:47.590 10:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:47.590 10:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:47.590 [2024-07-22 10:55:55.352232] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:47.590 [2024-07-22 10:55:55.352302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101103 ] 01:00:47.590 [2024-07-22 10:55:55.470838] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:00:47.590 [2024-07-22 10:55:55.479715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:47.590 [2024-07-22 10:55:55.520123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:48.525 10:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:48.525 10:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:48.525 10:55:56 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbTvA9VFZS 01:00:48.525 10:55:56 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:00:48.783 [2024-07-22 10:55:56.583723] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:00:48.783 nvme0n1 01:00:48.783 10:55:56 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:00:49.042 Running I/O for 1 seconds... 
01:00:49.978 01:00:49.978 Latency(us) 01:00:49.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:49.978 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:00:49.978 Verification LBA range: start 0x0 length 0x2000 01:00:49.978 nvme0n1 : 1.01 6017.87 23.51 0.00 0.00 21114.57 4237.47 16528.76 01:00:49.978 =================================================================================================================== 01:00:49.978 Total : 6017.87 23.51 0.00 0.00 21114.57 4237.47 16528.76 01:00:49.978 0 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 101103 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101103 ']' 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101103 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101103 01:00:49.978 killing process with pid 101103 01:00:49.978 Received shutdown signal, test time was about 1.000000 seconds 01:00:49.978 01:00:49.978 Latency(us) 01:00:49.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:49.978 =================================================================================================================== 01:00:49.978 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101103' 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101103 01:00:49.978 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101103 01:00:50.237 10:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 101006 01:00:50.237 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101006 ']' 01:00:50.237 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101006 01:00:50.237 10:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:50.237 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:50.237 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101006 01:00:50.237 killing process with pid 101006 01:00:50.237 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:50.237 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:50.237 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101006' 01:00:50.237 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101006 01:00:50.237 [2024-07-22 10:55:58.036056] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:00:50.237 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101006 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:00:50.495 10:55:58 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101173 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101173 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101173 ']' 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:50.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:50.495 10:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:50.495 [2024-07-22 10:55:58.278518] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:50.496 [2024-07-22 10:55:58.278583] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:50.496 [2024-07-22 10:55:58.396429] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:00:50.496 [2024-07-22 10:55:58.417301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:50.754 [2024-07-22 10:55:58.457163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:50.754 [2024-07-22 10:55:58.457207] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:50.754 [2024-07-22 10:55:58.457216] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:50.754 [2024-07-22 10:55:58.457225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:50.754 [2024-07-22 10:55:58.457231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
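A quick consistency check on the numbers reported so far: the 1-second verify run above finished at 6017.87 IOPS with 4096-byte I/Os, and 6017.87 * 4096 / 2^20 is about 23.5, matching the 23.51 MiB/s in the same row; the earlier 10-second TLS run (5981.24 IOPS) works out to its 23.36 MiB/s the same way. For example:

  awk 'BEGIN { print 6017.87 * 4096 / 1048576 }'   # about 23.5, i.e. the MiB/s column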
01:00:50.754 [2024-07-22 10:55:58.457259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:51.323 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:51.323 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:51.323 10:55:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:00:51.323 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:51.323 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:51.323 10:55:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:51.323 10:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 01:00:51.323 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:51.323 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:51.323 [2024-07-22 10:55:59.181941] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:51.323 malloc0 01:00:51.323 [2024-07-22 10:55:59.210647] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:00:51.323 [2024-07-22 10:55:59.210822] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:51.324 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:51.324 10:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=101223 01:00:51.324 10:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:00:51.324 10:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 101223 /var/tmp/bdevperf.sock 01:00:51.324 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101223 ']' 01:00:51.324 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:51.324 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:51.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:51.324 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:51.324 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:51.324 10:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:51.582 [2024-07-22 10:55:59.292297] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:51.582 [2024-07-22 10:55:59.292358] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101223 ] 01:00:51.582 [2024-07-22 10:55:59.410159] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
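The final initiator in this section goes fully through the keyring: the trace below registers /tmp/tmp.fbTvA9VFZS as key0 and attaches with --psk key0, and the save_config dump that follows records that key under the keyring subsystem, where the earlier dumps all had an empty keyring config. If one wanted to inspect the result by hand, the usual read-only RPCs would look something like this; these calls are not part of the test run, just an illustration:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers   # should list the nvme0 controller
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b nvme0n1   # the namespace the verify job runs against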
01:00:51.582 [2024-07-22 10:55:59.432399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:51.582 [2024-07-22 10:55:59.472303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:52.553 10:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:52.553 10:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:52.553 10:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbTvA9VFZS 01:00:52.553 10:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:00:52.813 [2024-07-22 10:56:00.507594] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:00:52.813 nvme0n1 01:00:52.813 10:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:00:52.813 Running I/O for 1 seconds... 01:00:54.190 01:00:54.190 Latency(us) 01:00:54.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:54.190 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:00:54.190 Verification LBA range: start 0x0 length 0x2000 01:00:54.190 nvme0n1 : 1.01 6028.16 23.55 0.00 0.00 21087.21 4448.03 16739.32 01:00:54.190 =================================================================================================================== 01:00:54.190 Total : 6028.16 23.55 0.00 0.00 21087.21 4448.03 16739.32 01:00:54.190 0 01:00:54.190 10:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 01:00:54.190 10:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:54.190 10:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:54.190 10:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:54.190 10:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 01:00:54.190 "subsystems": [ 01:00:54.190 { 01:00:54.190 "subsystem": "keyring", 01:00:54.190 "config": [ 01:00:54.190 { 01:00:54.190 "method": "keyring_file_add_key", 01:00:54.190 "params": { 01:00:54.190 "name": "key0", 01:00:54.190 "path": "/tmp/tmp.fbTvA9VFZS" 01:00:54.190 } 01:00:54.190 } 01:00:54.190 ] 01:00:54.190 }, 01:00:54.190 { 01:00:54.190 "subsystem": "iobuf", 01:00:54.190 "config": [ 01:00:54.190 { 01:00:54.190 "method": "iobuf_set_options", 01:00:54.190 "params": { 01:00:54.190 "large_bufsize": 135168, 01:00:54.190 "large_pool_count": 1024, 01:00:54.190 "small_bufsize": 8192, 01:00:54.190 "small_pool_count": 8192 01:00:54.190 } 01:00:54.190 } 01:00:54.190 ] 01:00:54.190 }, 01:00:54.190 { 01:00:54.190 "subsystem": "sock", 01:00:54.190 "config": [ 01:00:54.190 { 01:00:54.190 "method": "sock_set_default_impl", 01:00:54.190 "params": { 01:00:54.190 "impl_name": "posix" 01:00:54.190 } 01:00:54.190 }, 01:00:54.190 { 01:00:54.190 "method": "sock_impl_set_options", 01:00:54.190 "params": { 01:00:54.190 "enable_ktls": false, 01:00:54.190 "enable_placement_id": 0, 01:00:54.190 "enable_quickack": false, 01:00:54.190 "enable_recv_pipe": true, 01:00:54.190 "enable_zerocopy_send_client": false, 01:00:54.190 "enable_zerocopy_send_server": true, 01:00:54.190 "impl_name": "ssl", 01:00:54.190 "recv_buf_size": 4096, 
01:00:54.190 "send_buf_size": 4096, 01:00:54.190 "tls_version": 0, 01:00:54.190 "zerocopy_threshold": 0 01:00:54.190 } 01:00:54.190 }, 01:00:54.190 { 01:00:54.190 "method": "sock_impl_set_options", 01:00:54.190 "params": { 01:00:54.190 "enable_ktls": false, 01:00:54.190 "enable_placement_id": 0, 01:00:54.190 "enable_quickack": false, 01:00:54.190 "enable_recv_pipe": true, 01:00:54.190 "enable_zerocopy_send_client": false, 01:00:54.190 "enable_zerocopy_send_server": true, 01:00:54.190 "impl_name": "posix", 01:00:54.190 "recv_buf_size": 2097152, 01:00:54.190 "send_buf_size": 2097152, 01:00:54.190 "tls_version": 0, 01:00:54.190 "zerocopy_threshold": 0 01:00:54.190 } 01:00:54.190 } 01:00:54.190 ] 01:00:54.190 }, 01:00:54.190 { 01:00:54.190 "subsystem": "vmd", 01:00:54.190 "config": [] 01:00:54.190 }, 01:00:54.190 { 01:00:54.190 "subsystem": "accel", 01:00:54.190 "config": [ 01:00:54.190 { 01:00:54.190 "method": "accel_set_options", 01:00:54.191 "params": { 01:00:54.191 "buf_count": 2048, 01:00:54.191 "large_cache_size": 16, 01:00:54.191 "sequence_count": 2048, 01:00:54.191 "small_cache_size": 128, 01:00:54.191 "task_count": 2048 01:00:54.191 } 01:00:54.191 } 01:00:54.191 ] 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "subsystem": "bdev", 01:00:54.191 "config": [ 01:00:54.191 { 01:00:54.191 "method": "bdev_set_options", 01:00:54.191 "params": { 01:00:54.191 "bdev_auto_examine": true, 01:00:54.191 "bdev_io_cache_size": 256, 01:00:54.191 "bdev_io_pool_size": 65535, 01:00:54.191 "iobuf_large_cache_size": 16, 01:00:54.191 "iobuf_small_cache_size": 128 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "bdev_raid_set_options", 01:00:54.191 "params": { 01:00:54.191 "process_max_bandwidth_mb_sec": 0, 01:00:54.191 "process_window_size_kb": 1024 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "bdev_iscsi_set_options", 01:00:54.191 "params": { 01:00:54.191 "timeout_sec": 30 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "bdev_nvme_set_options", 01:00:54.191 "params": { 01:00:54.191 "action_on_timeout": "none", 01:00:54.191 "allow_accel_sequence": false, 01:00:54.191 "arbitration_burst": 0, 01:00:54.191 "bdev_retry_count": 3, 01:00:54.191 "ctrlr_loss_timeout_sec": 0, 01:00:54.191 "delay_cmd_submit": true, 01:00:54.191 "dhchap_dhgroups": [ 01:00:54.191 "null", 01:00:54.191 "ffdhe2048", 01:00:54.191 "ffdhe3072", 01:00:54.191 "ffdhe4096", 01:00:54.191 "ffdhe6144", 01:00:54.191 "ffdhe8192" 01:00:54.191 ], 01:00:54.191 "dhchap_digests": [ 01:00:54.191 "sha256", 01:00:54.191 "sha384", 01:00:54.191 "sha512" 01:00:54.191 ], 01:00:54.191 "disable_auto_failback": false, 01:00:54.191 "fast_io_fail_timeout_sec": 0, 01:00:54.191 "generate_uuids": false, 01:00:54.191 "high_priority_weight": 0, 01:00:54.191 "io_path_stat": false, 01:00:54.191 "io_queue_requests": 0, 01:00:54.191 "keep_alive_timeout_ms": 10000, 01:00:54.191 "low_priority_weight": 0, 01:00:54.191 "medium_priority_weight": 0, 01:00:54.191 "nvme_adminq_poll_period_us": 10000, 01:00:54.191 "nvme_error_stat": false, 01:00:54.191 "nvme_ioq_poll_period_us": 0, 01:00:54.191 "rdma_cm_event_timeout_ms": 0, 01:00:54.191 "rdma_max_cq_size": 0, 01:00:54.191 "rdma_srq_size": 0, 01:00:54.191 "reconnect_delay_sec": 0, 01:00:54.191 "timeout_admin_us": 0, 01:00:54.191 "timeout_us": 0, 01:00:54.191 "transport_ack_timeout": 0, 01:00:54.191 "transport_retry_count": 4, 01:00:54.191 "transport_tos": 0 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "bdev_nvme_set_hotplug", 01:00:54.191 "params": { 
01:00:54.191 "enable": false, 01:00:54.191 "period_us": 100000 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "bdev_malloc_create", 01:00:54.191 "params": { 01:00:54.191 "block_size": 4096, 01:00:54.191 "name": "malloc0", 01:00:54.191 "num_blocks": 8192, 01:00:54.191 "optimal_io_boundary": 0, 01:00:54.191 "physical_block_size": 4096, 01:00:54.191 "uuid": "84f31204-37d3-4c16-8a37-aaf9edc80a8a" 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "bdev_wait_for_examine" 01:00:54.191 } 01:00:54.191 ] 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "subsystem": "nbd", 01:00:54.191 "config": [] 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "subsystem": "scheduler", 01:00:54.191 "config": [ 01:00:54.191 { 01:00:54.191 "method": "framework_set_scheduler", 01:00:54.191 "params": { 01:00:54.191 "name": "static" 01:00:54.191 } 01:00:54.191 } 01:00:54.191 ] 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "subsystem": "nvmf", 01:00:54.191 "config": [ 01:00:54.191 { 01:00:54.191 "method": "nvmf_set_config", 01:00:54.191 "params": { 01:00:54.191 "admin_cmd_passthru": { 01:00:54.191 "identify_ctrlr": false 01:00:54.191 }, 01:00:54.191 "discovery_filter": "match_any" 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "nvmf_set_max_subsystems", 01:00:54.191 "params": { 01:00:54.191 "max_subsystems": 1024 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "nvmf_set_crdt", 01:00:54.191 "params": { 01:00:54.191 "crdt1": 0, 01:00:54.191 "crdt2": 0, 01:00:54.191 "crdt3": 0 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "nvmf_create_transport", 01:00:54.191 "params": { 01:00:54.191 "abort_timeout_sec": 1, 01:00:54.191 "ack_timeout": 0, 01:00:54.191 "buf_cache_size": 4294967295, 01:00:54.191 "c2h_success": false, 01:00:54.191 "data_wr_pool_size": 0, 01:00:54.191 "dif_insert_or_strip": false, 01:00:54.191 "in_capsule_data_size": 4096, 01:00:54.191 "io_unit_size": 131072, 01:00:54.191 "max_aq_depth": 128, 01:00:54.191 "max_io_qpairs_per_ctrlr": 127, 01:00:54.191 "max_io_size": 131072, 01:00:54.191 "max_queue_depth": 128, 01:00:54.191 "num_shared_buffers": 511, 01:00:54.191 "sock_priority": 0, 01:00:54.191 "trtype": "TCP", 01:00:54.191 "zcopy": false 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "nvmf_create_subsystem", 01:00:54.191 "params": { 01:00:54.191 "allow_any_host": false, 01:00:54.191 "ana_reporting": false, 01:00:54.191 "max_cntlid": 65519, 01:00:54.191 "max_namespaces": 32, 01:00:54.191 "min_cntlid": 1, 01:00:54.191 "model_number": "SPDK bdev Controller", 01:00:54.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:54.191 "serial_number": "00000000000000000000" 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "nvmf_subsystem_add_host", 01:00:54.191 "params": { 01:00:54.191 "host": "nqn.2016-06.io.spdk:host1", 01:00:54.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:54.191 "psk": "key0" 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "nvmf_subsystem_add_ns", 01:00:54.191 "params": { 01:00:54.191 "namespace": { 01:00:54.191 "bdev_name": "malloc0", 01:00:54.191 "nguid": "84F3120437D34C168A37AAF9EDC80A8A", 01:00:54.191 "no_auto_visible": false, 01:00:54.191 "nsid": 1, 01:00:54.191 "uuid": "84f31204-37d3-4c16-8a37-aaf9edc80a8a" 01:00:54.191 }, 01:00:54.191 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "nvmf_subsystem_add_listener", 01:00:54.191 "params": { 01:00:54.191 "listen_address": { 01:00:54.191 
"adrfam": "IPv4", 01:00:54.191 "traddr": "10.0.0.2", 01:00:54.191 "trsvcid": "4420", 01:00:54.191 "trtype": "TCP" 01:00:54.191 }, 01:00:54.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:54.191 "secure_channel": false, 01:00:54.191 "sock_impl": "ssl" 01:00:54.191 } 01:00:54.191 } 01:00:54.191 ] 01:00:54.191 } 01:00:54.191 ] 01:00:54.191 }' 01:00:54.191 10:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:00:54.191 10:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 01:00:54.191 "subsystems": [ 01:00:54.191 { 01:00:54.191 "subsystem": "keyring", 01:00:54.191 "config": [ 01:00:54.191 { 01:00:54.191 "method": "keyring_file_add_key", 01:00:54.191 "params": { 01:00:54.191 "name": "key0", 01:00:54.191 "path": "/tmp/tmp.fbTvA9VFZS" 01:00:54.191 } 01:00:54.191 } 01:00:54.191 ] 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "subsystem": "iobuf", 01:00:54.191 "config": [ 01:00:54.191 { 01:00:54.191 "method": "iobuf_set_options", 01:00:54.191 "params": { 01:00:54.191 "large_bufsize": 135168, 01:00:54.191 "large_pool_count": 1024, 01:00:54.191 "small_bufsize": 8192, 01:00:54.191 "small_pool_count": 8192 01:00:54.191 } 01:00:54.191 } 01:00:54.191 ] 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "subsystem": "sock", 01:00:54.191 "config": [ 01:00:54.191 { 01:00:54.191 "method": "sock_set_default_impl", 01:00:54.191 "params": { 01:00:54.191 "impl_name": "posix" 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "sock_impl_set_options", 01:00:54.191 "params": { 01:00:54.191 "enable_ktls": false, 01:00:54.191 "enable_placement_id": 0, 01:00:54.191 "enable_quickack": false, 01:00:54.191 "enable_recv_pipe": true, 01:00:54.191 "enable_zerocopy_send_client": false, 01:00:54.191 "enable_zerocopy_send_server": true, 01:00:54.191 "impl_name": "ssl", 01:00:54.191 "recv_buf_size": 4096, 01:00:54.191 "send_buf_size": 4096, 01:00:54.191 "tls_version": 0, 01:00:54.191 "zerocopy_threshold": 0 01:00:54.191 } 01:00:54.191 }, 01:00:54.191 { 01:00:54.191 "method": "sock_impl_set_options", 01:00:54.191 "params": { 01:00:54.191 "enable_ktls": false, 01:00:54.191 "enable_placement_id": 0, 01:00:54.191 "enable_quickack": false, 01:00:54.191 "enable_recv_pipe": true, 01:00:54.192 "enable_zerocopy_send_client": false, 01:00:54.192 "enable_zerocopy_send_server": true, 01:00:54.192 "impl_name": "posix", 01:00:54.192 "recv_buf_size": 2097152, 01:00:54.192 "send_buf_size": 2097152, 01:00:54.192 "tls_version": 0, 01:00:54.192 "zerocopy_threshold": 0 01:00:54.192 } 01:00:54.192 } 01:00:54.192 ] 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "subsystem": "vmd", 01:00:54.192 "config": [] 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "subsystem": "accel", 01:00:54.192 "config": [ 01:00:54.192 { 01:00:54.192 "method": "accel_set_options", 01:00:54.192 "params": { 01:00:54.192 "buf_count": 2048, 01:00:54.192 "large_cache_size": 16, 01:00:54.192 "sequence_count": 2048, 01:00:54.192 "small_cache_size": 128, 01:00:54.192 "task_count": 2048 01:00:54.192 } 01:00:54.192 } 01:00:54.192 ] 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "subsystem": "bdev", 01:00:54.192 "config": [ 01:00:54.192 { 01:00:54.192 "method": "bdev_set_options", 01:00:54.192 "params": { 01:00:54.192 "bdev_auto_examine": true, 01:00:54.192 "bdev_io_cache_size": 256, 01:00:54.192 "bdev_io_pool_size": 65535, 01:00:54.192 "iobuf_large_cache_size": 16, 01:00:54.192 "iobuf_small_cache_size": 128 01:00:54.192 } 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "method": 
"bdev_raid_set_options", 01:00:54.192 "params": { 01:00:54.192 "process_max_bandwidth_mb_sec": 0, 01:00:54.192 "process_window_size_kb": 1024 01:00:54.192 } 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "method": "bdev_iscsi_set_options", 01:00:54.192 "params": { 01:00:54.192 "timeout_sec": 30 01:00:54.192 } 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "method": "bdev_nvme_set_options", 01:00:54.192 "params": { 01:00:54.192 "action_on_timeout": "none", 01:00:54.192 "allow_accel_sequence": false, 01:00:54.192 "arbitration_burst": 0, 01:00:54.192 "bdev_retry_count": 3, 01:00:54.192 "ctrlr_loss_timeout_sec": 0, 01:00:54.192 "delay_cmd_submit": true, 01:00:54.192 "dhchap_dhgroups": [ 01:00:54.192 "null", 01:00:54.192 "ffdhe2048", 01:00:54.192 "ffdhe3072", 01:00:54.192 "ffdhe4096", 01:00:54.192 "ffdhe6144", 01:00:54.192 "ffdhe8192" 01:00:54.192 ], 01:00:54.192 "dhchap_digests": [ 01:00:54.192 "sha256", 01:00:54.192 "sha384", 01:00:54.192 "sha512" 01:00:54.192 ], 01:00:54.192 "disable_auto_failback": false, 01:00:54.192 "fast_io_fail_timeout_sec": 0, 01:00:54.192 "generate_uuids": false, 01:00:54.192 "high_priority_weight": 0, 01:00:54.192 "io_path_stat": false, 01:00:54.192 "io_queue_requests": 512, 01:00:54.192 "keep_alive_timeout_ms": 10000, 01:00:54.192 "low_priority_weight": 0, 01:00:54.192 "medium_priority_weight": 0, 01:00:54.192 "nvme_adminq_poll_period_us": 10000, 01:00:54.192 "nvme_error_stat": false, 01:00:54.192 "nvme_ioq_poll_period_us": 0, 01:00:54.192 "rdma_cm_event_timeout_ms": 0, 01:00:54.192 "rdma_max_cq_size": 0, 01:00:54.192 "rdma_srq_size": 0, 01:00:54.192 "reconnect_delay_sec": 0, 01:00:54.192 "timeout_admin_us": 0, 01:00:54.192 "timeout_us": 0, 01:00:54.192 "transport_ack_timeout": 0, 01:00:54.192 "transport_retry_count": 4, 01:00:54.192 "transport_tos": 0 01:00:54.192 } 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "method": "bdev_nvme_attach_controller", 01:00:54.192 "params": { 01:00:54.192 "adrfam": "IPv4", 01:00:54.192 "ctrlr_loss_timeout_sec": 0, 01:00:54.192 "ddgst": false, 01:00:54.192 "fast_io_fail_timeout_sec": 0, 01:00:54.192 "hdgst": false, 01:00:54.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:54.192 "name": "nvme0", 01:00:54.192 "prchk_guard": false, 01:00:54.192 "prchk_reftag": false, 01:00:54.192 "psk": "key0", 01:00:54.192 "reconnect_delay_sec": 0, 01:00:54.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:54.192 "traddr": "10.0.0.2", 01:00:54.192 "trsvcid": "4420", 01:00:54.192 "trtype": "TCP" 01:00:54.192 } 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "method": "bdev_nvme_set_hotplug", 01:00:54.192 "params": { 01:00:54.192 "enable": false, 01:00:54.192 "period_us": 100000 01:00:54.192 } 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "method": "bdev_enable_histogram", 01:00:54.192 "params": { 01:00:54.192 "enable": true, 01:00:54.192 "name": "nvme0n1" 01:00:54.192 } 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "method": "bdev_wait_for_examine" 01:00:54.192 } 01:00:54.192 ] 01:00:54.192 }, 01:00:54.192 { 01:00:54.192 "subsystem": "nbd", 01:00:54.192 "config": [] 01:00:54.192 } 01:00:54.192 ] 01:00:54.192 }' 01:00:54.192 10:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 101223 01:00:54.192 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101223 ']' 01:00:54.192 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101223 01:00:54.192 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:54.192 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
01:00:54.192 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101223 01:00:54.451 killing process with pid 101223 01:00:54.451 Received shutdown signal, test time was about 1.000000 seconds 01:00:54.451 01:00:54.451 Latency(us) 01:00:54.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:54.451 =================================================================================================================== 01:00:54.451 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101223' 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101223 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101223 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 101173 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101173 ']' 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101173 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101173 01:00:54.451 killing process with pid 101173 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101173' 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101173 01:00:54.451 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101173 01:00:54.722 10:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 01:00:54.722 10:56:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:00:54.722 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:54.722 10:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 01:00:54.722 "subsystems": [ 01:00:54.722 { 01:00:54.722 "subsystem": "keyring", 01:00:54.722 "config": [ 01:00:54.722 { 01:00:54.722 "method": "keyring_file_add_key", 01:00:54.722 "params": { 01:00:54.722 "name": "key0", 01:00:54.722 "path": "/tmp/tmp.fbTvA9VFZS" 01:00:54.722 } 01:00:54.722 } 01:00:54.722 ] 01:00:54.722 }, 01:00:54.722 { 01:00:54.722 "subsystem": "iobuf", 01:00:54.722 "config": [ 01:00:54.722 { 01:00:54.722 "method": "iobuf_set_options", 01:00:54.722 "params": { 01:00:54.722 "large_bufsize": 135168, 01:00:54.722 "large_pool_count": 1024, 01:00:54.722 "small_bufsize": 8192, 01:00:54.722 "small_pool_count": 8192 01:00:54.722 } 01:00:54.722 } 01:00:54.722 ] 01:00:54.722 }, 01:00:54.722 { 01:00:54.722 "subsystem": "sock", 01:00:54.722 "config": [ 01:00:54.722 { 01:00:54.722 "method": "sock_set_default_impl", 01:00:54.722 "params": { 01:00:54.722 "impl_name": "posix" 01:00:54.722 } 01:00:54.722 }, 01:00:54.722 { 01:00:54.722 "method": "sock_impl_set_options", 01:00:54.722 "params": { 01:00:54.722 "enable_ktls": false, 
01:00:54.722 "enable_placement_id": 0, 01:00:54.722 "enable_quickack": false, 01:00:54.722 "enable_recv_pipe": true, 01:00:54.722 "enable_zerocopy_send_client": false, 01:00:54.722 "enable_zerocopy_send_server": true, 01:00:54.722 "impl_name": "ssl", 01:00:54.722 "recv_buf_size": 4096, 01:00:54.722 "send_buf_size": 4096, 01:00:54.722 "tls_version": 0, 01:00:54.722 "zerocopy_threshold": 0 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "sock_impl_set_options", 01:00:54.723 "params": { 01:00:54.723 "enable_ktls": false, 01:00:54.723 "enable_placement_id": 0, 01:00:54.723 "enable_quickack": false, 01:00:54.723 "enable_recv_pipe": true, 01:00:54.723 "enable_zerocopy_send_client": false, 01:00:54.723 "enable_zerocopy_send_server": true, 01:00:54.723 "impl_name": "posix", 01:00:54.723 "recv_buf_size": 2097152, 01:00:54.723 "send_buf_size": 2097152, 01:00:54.723 "tls_version": 0, 01:00:54.723 "zerocopy_threshold": 0 01:00:54.723 } 01:00:54.723 } 01:00:54.723 ] 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "subsystem": "vmd", 01:00:54.723 "config": [] 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "subsystem": "accel", 01:00:54.723 "config": [ 01:00:54.723 { 01:00:54.723 "method": "accel_set_options", 01:00:54.723 "params": { 01:00:54.723 "buf_count": 2048, 01:00:54.723 "large_cache_size": 16, 01:00:54.723 "sequence_count": 2048, 01:00:54.723 "small_cache_size": 128, 01:00:54.723 "task_count": 2048 01:00:54.723 } 01:00:54.723 } 01:00:54.723 ] 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "subsystem": "bdev", 01:00:54.723 "config": [ 01:00:54.723 { 01:00:54.723 "method": "bdev_set_options", 01:00:54.723 "params": { 01:00:54.723 "bdev_auto_examine": true, 01:00:54.723 "bdev_io_cache_size": 256, 01:00:54.723 "bdev_io_pool_size": 65535, 01:00:54.723 "iobuf_large_cache_size": 16, 01:00:54.723 "iobuf_small_cache_size": 128 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "bdev_raid_set_options", 01:00:54.723 "params": { 01:00:54.723 "process_max_bandwidth_mb_sec": 0, 01:00:54.723 "process_window_size_kb": 1024 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "bdev_iscsi_set_options", 01:00:54.723 "params": { 01:00:54.723 "timeout_sec": 30 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "bdev_nvme_set_options", 01:00:54.723 "params": { 01:00:54.723 "action_on_timeout": "none", 01:00:54.723 "allow_accel_sequence": false, 01:00:54.723 "arbitration_burst": 0, 01:00:54.723 "bdev_retry_count": 3, 01:00:54.723 "ctrlr_loss_timeout_sec": 0, 01:00:54.723 "delay_cmd_submit": true, 01:00:54.723 "dhchap_dhgroups": [ 01:00:54.723 "null", 01:00:54.723 "ffdhe2048", 01:00:54.723 "ffdhe3072", 01:00:54.723 "ffdhe4096", 01:00:54.723 "ffdhe6144", 01:00:54.723 "ffdhe8192" 01:00:54.723 ], 01:00:54.723 "dhchap_digests": [ 01:00:54.723 "sha256", 01:00:54.723 "sha384", 01:00:54.723 "sha512" 01:00:54.723 ], 01:00:54.723 "disable_auto_failback": false, 01:00:54.723 "fast_io_fail_timeout_sec": 0, 01:00:54.723 "generate_uuids": false, 01:00:54.723 "high_priority_weight": 0, 01:00:54.723 "io_path_stat": false, 01:00:54.723 "io_queue_requests": 0, 01:00:54.723 "keep_alive_timeout_ms": 10000, 01:00:54.723 "low_priority_weight": 0, 01:00:54.723 "medium_priority_weight": 0, 01:00:54.723 "nvme_adminq_poll_period_us": 10000, 01:00:54.723 "nvme_error_stat": false, 01:00:54.723 "nvme_ioq_poll_period_us": 0, 01:00:54.723 "rdma_cm_event_timeout_ms": 0, 01:00:54.723 "rdma_max_cq_size": 0, 01:00:54.723 "rdma_srq_size": 0, 01:00:54.723 "reconnect_delay_sec": 0, 
01:00:54.723 "timeout_admin_us": 0, 01:00:54.723 "timeout_us": 0, 01:00:54.723 "transport_ack_timeout": 0, 01:00:54.723 "transport_retry_count": 4, 01:00:54.723 "transport_tos": 0 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "bdev_nvme_set_hotplug", 01:00:54.723 "params": { 01:00:54.723 "enable": false, 01:00:54.723 "period_us": 100000 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "bdev_malloc_create", 01:00:54.723 "params": { 01:00:54.723 "block_size": 4096, 01:00:54.723 "name": "malloc0", 01:00:54.723 "num_blocks": 8192, 01:00:54.723 "optimal_io_boundary": 0, 01:00:54.723 "physical_block_size": 4096, 01:00:54.723 "uuid": "84f31204-37d3-4c16-8a37-aaf9edc80a8a" 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "bdev_wait_for_examine" 01:00:54.723 } 01:00:54.723 ] 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "subsystem": "nbd", 01:00:54.723 "config": [] 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "subsystem": "scheduler", 01:00:54.723 "config": [ 01:00:54.723 { 01:00:54.723 "method": "framework_set_scheduler", 01:00:54.723 "params": { 01:00:54.723 "name": "static" 01:00:54.723 } 01:00:54.723 } 01:00:54.723 ] 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "subsystem": "nvmf", 01:00:54.723 "config": [ 01:00:54.723 { 01:00:54.723 "method": "nvmf_set_config", 01:00:54.723 "params": { 01:00:54.723 "admin_cmd_passthru": { 01:00:54.723 "identify_ctrlr": false 01:00:54.723 }, 01:00:54.723 "discovery_filter": "match_any" 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "nvmf_set_max_subsystems", 01:00:54.723 "params": { 01:00:54.723 "max_subsystems": 1024 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "nvmf_set_crdt", 01:00:54.723 "params": { 01:00:54.723 "crdt1": 0, 01:00:54.723 "crdt2": 0, 01:00:54.723 "crdt3": 0 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "nvmf_create_transport", 01:00:54.723 "params": { 01:00:54.723 "abort_timeout_sec": 1, 01:00:54.723 "ack_timeout": 0, 01:00:54.723 "buf_cache_size": 4294967295, 01:00:54.723 "c2h_success": false, 01:00:54.723 "data_wr_pool_size": 0, 01:00:54.723 "dif_insert_or_strip": false, 01:00:54.723 "in_capsule_data_size": 4096, 01:00:54.723 "io_unit_size": 131072, 01:00:54.723 "max_aq_depth": 128, 01:00:54.723 "max_io_qpairs_per_ctrlr": 127, 01:00:54.723 "max_io_size": 131072, 01:00:54.723 "max_queue_depth": 128, 01:00:54.723 "num_shared_buffers": 511, 01:00:54.723 "sock_priority": 0, 01:00:54.723 "trtype": "TCP", 01:00:54.723 "zcopy": false 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "nvmf_create_subsystem", 01:00:54.723 "params": { 01:00:54.723 "allow_any_host": false, 01:00:54.723 "ana_reporting": false, 01:00:54.723 "max_cntlid": 65519, 01:00:54.723 "max_namespaces": 32, 01:00:54.723 "min_cntlid": 1, 01:00:54.723 "model_number": "SPDK bdev Controller", 01:00:54.723 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:54.723 "serial_number": "00000000000000000000" 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "nvmf_subsystem_add_host", 01:00:54.723 "params": { 01:00:54.723 "host": "nqn.2016-06.io.spdk:host1", 01:00:54.723 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:54.723 "psk": "key0" 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "nvmf_subsystem_add_ns", 01:00:54.723 "params": { 01:00:54.723 "namespace": { 01:00:54.723 "bdev_name": "malloc0", 01:00:54.723 "nguid": "84F3120437D34C168A37AAF9EDC80A8A", 01:00:54.723 "no_auto_visible": false, 01:00:54.723 "nsid": 1, 
01:00:54.723 "uuid": "84f31204-37d3-4c16-8a37-aaf9edc80a8a" 01:00:54.723 }, 01:00:54.723 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:00:54.723 } 01:00:54.723 }, 01:00:54.723 { 01:00:54.723 "method": "nvmf_subsystem_add_listener", 01:00:54.723 "params": { 01:00:54.723 "listen_address": { 01:00:54.723 "adrfam": "IPv4", 01:00:54.723 "traddr": "10.0.0.2", 01:00:54.723 "trsvcid": "4420", 01:00:54.723 "trtype": "TCP" 01:00:54.723 }, 01:00:54.723 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:00:54.723 "secure_channel": false, 01:00:54.723 "sock_impl": "ssl" 01:00:54.723 } 01:00:54.723 } 01:00:54.723 ] 01:00:54.723 } 01:00:54.723 ] 01:00:54.723 }' 01:00:54.723 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:54.723 10:56:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101308 01:00:54.723 10:56:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 01:00:54.723 10:56:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101308 01:00:54.723 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101308 ']' 01:00:54.723 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:54.723 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:54.723 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:54.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:54.723 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:54.723 10:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:54.723 [2024-07-22 10:56:02.593937] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:54.723 [2024-07-22 10:56:02.594001] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:54.982 [2024-07-22 10:56:02.714601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:00:54.982 [2024-07-22 10:56:02.723548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:54.982 [2024-07-22 10:56:02.761883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:54.982 [2024-07-22 10:56:02.761933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:54.982 [2024-07-22 10:56:02.761942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:54.982 [2024-07-22 10:56:02.761949] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:54.982 [2024-07-22 10:56:02.761971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:00:54.982 [2024-07-22 10:56:02.762046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:55.240 [2024-07-22 10:56:02.967613] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:55.240 [2024-07-22 10:56:02.999518] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:00:55.240 [2024-07-22 10:56:02.999688] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:55.498 10:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:55.498 10:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:55.498 10:56:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:00:55.498 10:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:55.498 10:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=101352 01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 101352 /var/tmp/bdevperf.sock 01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101352 ']' 01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:55.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
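With the replayed target listening again on 10.0.0.2:4420, the next trace lines launch a second bdevperf whose own configuration (including the key and the pre-attached nvme0 controller) is passed inline via /dev/fd/63, and then drive it with perform_tests. A sketch of that pattern, assuming the bdevperf JSON is in $bperfcfg and reusing the queue, IO-size, and workload flags shown in the log:

    # Start bdevperf in wait mode with an inline JSON config, then run the
    # verify workload through its RPC helper script.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests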
01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:55.758 10:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 01:00:55.758 "subsystems": [ 01:00:55.758 { 01:00:55.758 "subsystem": "keyring", 01:00:55.758 "config": [ 01:00:55.758 { 01:00:55.758 "method": "keyring_file_add_key", 01:00:55.758 "params": { 01:00:55.758 "name": "key0", 01:00:55.758 "path": "/tmp/tmp.fbTvA9VFZS" 01:00:55.758 } 01:00:55.758 } 01:00:55.758 ] 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "subsystem": "iobuf", 01:00:55.758 "config": [ 01:00:55.758 { 01:00:55.758 "method": "iobuf_set_options", 01:00:55.758 "params": { 01:00:55.758 "large_bufsize": 135168, 01:00:55.758 "large_pool_count": 1024, 01:00:55.758 "small_bufsize": 8192, 01:00:55.758 "small_pool_count": 8192 01:00:55.758 } 01:00:55.758 } 01:00:55.758 ] 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "subsystem": "sock", 01:00:55.758 "config": [ 01:00:55.758 { 01:00:55.758 "method": "sock_set_default_impl", 01:00:55.758 "params": { 01:00:55.758 "impl_name": "posix" 01:00:55.758 } 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "method": "sock_impl_set_options", 01:00:55.758 "params": { 01:00:55.758 "enable_ktls": false, 01:00:55.758 "enable_placement_id": 0, 01:00:55.758 "enable_quickack": false, 01:00:55.758 "enable_recv_pipe": true, 01:00:55.758 "enable_zerocopy_send_client": false, 01:00:55.758 "enable_zerocopy_send_server": true, 01:00:55.758 "impl_name": "ssl", 01:00:55.758 "recv_buf_size": 4096, 01:00:55.758 "send_buf_size": 4096, 01:00:55.758 "tls_version": 0, 01:00:55.758 "zerocopy_threshold": 0 01:00:55.758 } 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "method": "sock_impl_set_options", 01:00:55.758 "params": { 01:00:55.758 "enable_ktls": false, 01:00:55.758 "enable_placement_id": 0, 01:00:55.758 "enable_quickack": false, 01:00:55.758 "enable_recv_pipe": true, 01:00:55.758 "enable_zerocopy_send_client": false, 01:00:55.758 "enable_zerocopy_send_server": true, 01:00:55.758 "impl_name": "posix", 01:00:55.758 "recv_buf_size": 2097152, 01:00:55.758 "send_buf_size": 2097152, 01:00:55.758 "tls_version": 0, 01:00:55.758 "zerocopy_threshold": 0 01:00:55.758 } 01:00:55.758 } 01:00:55.758 ] 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "subsystem": "vmd", 01:00:55.758 "config": [] 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "subsystem": "accel", 01:00:55.758 "config": [ 01:00:55.758 { 01:00:55.758 "method": "accel_set_options", 01:00:55.758 "params": { 01:00:55.758 "buf_count": 2048, 01:00:55.758 "large_cache_size": 16, 01:00:55.758 "sequence_count": 2048, 01:00:55.758 "small_cache_size": 128, 01:00:55.758 "task_count": 2048 01:00:55.758 } 01:00:55.758 } 01:00:55.758 ] 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "subsystem": "bdev", 01:00:55.758 "config": [ 01:00:55.758 { 01:00:55.758 "method": "bdev_set_options", 01:00:55.758 "params": { 01:00:55.758 "bdev_auto_examine": true, 01:00:55.758 "bdev_io_cache_size": 256, 01:00:55.758 "bdev_io_pool_size": 65535, 01:00:55.758 "iobuf_large_cache_size": 16, 01:00:55.758 "iobuf_small_cache_size": 128 01:00:55.758 } 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "method": "bdev_raid_set_options", 01:00:55.758 "params": { 01:00:55.758 "process_max_bandwidth_mb_sec": 0, 01:00:55.758 "process_window_size_kb": 
1024 01:00:55.758 } 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "method": "bdev_iscsi_set_options", 01:00:55.758 "params": { 01:00:55.758 "timeout_sec": 30 01:00:55.758 } 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "method": "bdev_nvme_set_options", 01:00:55.758 "params": { 01:00:55.758 "action_on_timeout": "none", 01:00:55.758 "allow_accel_sequence": false, 01:00:55.758 "arbitration_burst": 0, 01:00:55.758 "bdev_retry_count": 3, 01:00:55.758 "ctrlr_loss_timeout_sec": 0, 01:00:55.758 "delay_cmd_submit": true, 01:00:55.758 "dhchap_dhgroups": [ 01:00:55.758 "null", 01:00:55.758 "ffdhe2048", 01:00:55.758 "ffdhe3072", 01:00:55.758 "ffdhe4096", 01:00:55.758 "ffdhe6144", 01:00:55.758 "ffdhe8192" 01:00:55.758 ], 01:00:55.758 "dhchap_digests": [ 01:00:55.758 "sha256", 01:00:55.758 "sha384", 01:00:55.758 "sha512" 01:00:55.758 ], 01:00:55.758 "disable_auto_failback": false, 01:00:55.758 "fast_io_fail_timeout_sec": 0, 01:00:55.758 "generate_uuids": false, 01:00:55.758 "high_priority_weight": 0, 01:00:55.758 "io_path_stat": false, 01:00:55.758 "io_queue_requests": 512, 01:00:55.758 "keep_alive_timeout_ms": 10000, 01:00:55.758 "low_priority_weight": 0, 01:00:55.758 "medium_priority_weight": 0, 01:00:55.758 "nvme_adminq_poll_period_us": 10000, 01:00:55.758 "nvme_error_stat": false, 01:00:55.758 "nvme_ioq_poll_period_us": 0, 01:00:55.758 "rdma_cm_event_timeout_ms": 0, 01:00:55.758 "rdma_max_cq_size": 0, 01:00:55.758 "rdma_srq_size": 0, 01:00:55.758 "reconnect_delay_sec": 0, 01:00:55.758 "timeout_admin_us": 0, 01:00:55.758 "timeout_us": 0, 01:00:55.758 "transport_ack_timeout": 0, 01:00:55.758 "transport_retry_count": 4, 01:00:55.758 "transport_tos": 0 01:00:55.758 } 01:00:55.758 }, 01:00:55.758 { 01:00:55.758 "method": "bdev_nvme_attach_controller", 01:00:55.758 "params": { 01:00:55.758 "adrfam": "IPv4", 01:00:55.758 "ctrlr_loss_timeout_sec": 0, 01:00:55.758 "ddgst": false, 01:00:55.758 "fast_io_fail_timeout_sec": 0, 01:00:55.758 "hdgst": false, 01:00:55.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:55.758 "name": "nvme0", 01:00:55.758 "prchk_guard": false, 01:00:55.758 "prchk_reftag": false, 01:00:55.758 "psk": "key0", 01:00:55.759 "reconnect_delay_sec": 0, 01:00:55.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:55.759 "traddr": "10.0.0.2", 01:00:55.759 "trsvcid": "4420", 01:00:55.759 "trtype": "TCP" 01:00:55.759 } 01:00:55.759 }, 01:00:55.759 { 01:00:55.759 "method": "bdev_nvme_set_hotplug", 01:00:55.759 "params": { 01:00:55.759 "enable": false, 01:00:55.759 "period_us": 100000 01:00:55.759 } 01:00:55.759 }, 01:00:55.759 { 01:00:55.759 "method": "bdev_enable_histogram", 01:00:55.759 "params": { 01:00:55.759 "enable": true, 01:00:55.759 "name": "nvme0n1" 01:00:55.759 } 01:00:55.759 }, 01:00:55.759 { 01:00:55.759 "method": "bdev_wait_for_examine" 01:00:55.759 } 01:00:55.759 ] 01:00:55.759 }, 01:00:55.759 { 01:00:55.759 "subsystem": "nbd", 01:00:55.759 "config": [] 01:00:55.759 } 01:00:55.759 ] 01:00:55.759 }' 01:00:55.759 [2024-07-22 10:56:03.515919] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:55.759 [2024-07-22 10:56:03.515985] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101352 ] 01:00:55.759 [2024-07-22 10:56:03.633833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 01:00:55.759 [2024-07-22 10:56:03.655592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:56.017 [2024-07-22 10:56:03.695440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:56.017 [2024-07-22 10:56:03.842584] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:00:56.584 10:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:56.584 10:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:00:56.584 10:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:00:56.584 10:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 01:00:56.843 10:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:56.843 10:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:00:56.843 Running I/O for 1 seconds... 01:00:57.780 01:00:57.780 Latency(us) 01:00:57.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:57.780 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:00:57.780 Verification LBA range: start 0x0 length 0x2000 01:00:57.780 nvme0n1 : 1.01 5972.20 23.33 0.00 0.00 21276.07 4053.23 18844.89 01:00:57.780 =================================================================================================================== 01:00:57.780 Total : 5972.20 23.33 0.00 0.00 21276.07 4053.23 18844.89 01:00:57.780 0 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 01:00:57.780 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:00:57.780 nvmf_trace.0 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 101352 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101352 ']' 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101352 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101352 01:00:58.039 killing process with pid 101352 01:00:58.039 Received shutdown signal, 
test time was about 1.000000 seconds 01:00:58.039 01:00:58.039 Latency(us) 01:00:58.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:58.039 =================================================================================================================== 01:00:58.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101352' 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101352 01:00:58.039 10:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101352 01:00:58.297 10:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 01:00:58.297 10:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 01:00:58.297 10:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:00:58.298 rmmod nvme_tcp 01:00:58.298 rmmod nvme_fabrics 01:00:58.298 rmmod nvme_keyring 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 101308 ']' 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 101308 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101308 ']' 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101308 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101308 01:00:58.298 killing process with pid 101308 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101308' 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101308 01:00:58.298 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101308 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.t6IOsvv165 /tmp/tmp.czK1NvTr94 /tmp/tmp.fbTvA9VFZS 01:00:58.557 01:00:58.557 real 1m19.627s 01:00:58.557 user 1m58.604s 01:00:58.557 sys 0m29.812s 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:58.557 ************************************ 01:00:58.557 END TEST nvmf_tls 01:00:58.557 ************************************ 01:00:58.557 10:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:00:58.557 10:56:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:00:58.557 10:56:06 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:00:58.557 10:56:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:00:58.557 10:56:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:58.557 10:56:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:00:58.557 ************************************ 01:00:58.557 START TEST nvmf_fips 01:00:58.557 ************************************ 01:00:58.557 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:00:58.817 * Looking for test storage... 01:00:58.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:00:58.817 
10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 01:00:58.817 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 01:00:58.818 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 01:00:59.076 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 01:00:59.077 Error setting digest 01:00:59.077 00B2E721A37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 01:00:59.077 00B2E721A37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:00:59.077 Cannot find device "nvmf_tgt_br" 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:00:59.077 Cannot find device "nvmf_tgt_br2" 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:00:59.077 Cannot find device "nvmf_tgt_br" 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:00:59.077 Cannot find device "nvmf_tgt_br2" 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 01:00:59.077 10:56:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:00:59.335 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:00:59.335 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:00:59.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:00:59.335 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 01:00:59.335 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:00:59.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:00:59.335 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 01:00:59.335 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:00:59.335 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:00:59.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:00:59.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 01:00:59.336 01:00:59.336 --- 10.0.0.2 ping statistics --- 01:00:59.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:59.336 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:00:59.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:00:59.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 01:00:59.336 01:00:59.336 --- 10.0.0.3 ping statistics --- 01:00:59.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:59.336 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:00:59.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:00:59.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 01:00:59.336 01:00:59.336 --- 10.0.0.1 ping statistics --- 01:00:59.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:59.336 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:00:59.336 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=101635 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 101635 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 101635 ']' 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:59.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:59.595 10:56:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:00:59.595 [2024-07-22 10:56:07.386448] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:00:59.595 [2024-07-22 10:56:07.386502] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:59.595 [2024-07-22 10:56:07.504043] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 01:00:59.854 [2024-07-22 10:56:07.527683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:59.854 [2024-07-22 10:56:07.568189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:59.854 [2024-07-22 10:56:07.568235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:59.854 [2024-07-22 10:56:07.568244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:59.854 [2024-07-22 10:56:07.568252] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:59.854 [2024-07-22 10:56:07.568258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:00:59.854 [2024-07-22 10:56:07.568298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:01:00.422 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:01:00.681 [2024-07-22 10:56:08.425725] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:00.681 [2024-07-22 10:56:08.441703] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:01:00.681 [2024-07-22 10:56:08.441981] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:01:00.681 [2024-07-22 10:56:08.470705] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:01:00.681 malloc0 01:01:00.681 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:01:00.681 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=101687 01:01:00.681 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:01:00.681 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 01:01:00.681 10:56:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 101687 /var/tmp/bdevperf.sock 01:01:00.681 10:56:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 101687 ']' 01:01:00.681 10:56:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:01:00.681 10:56:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:00.681 10:56:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:01:00.681 10:56:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:00.681 10:56:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:01:00.681 [2024-07-22 10:56:08.570771] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:01:00.681 [2024-07-22 10:56:08.570993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101687 ] 01:01:00.940 [2024-07-22 10:56:08.688789] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:01:00.940 [2024-07-22 10:56:08.711942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:00.940 [2024-07-22 10:56:08.753427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:01:01.508 10:56:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:01.508 10:56:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 01:01:01.508 10:56:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:01:01.766 [2024-07-22 10:56:09.562874] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:01:01.767 [2024-07-22 10:56:09.563164] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:01:01.767 TLSTESTn1 01:01:01.767 10:56:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:01:02.025 Running I/O for 10 seconds... 
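The flow traced above is: write the NVMe TLS PSK to a file with 0600 permissions, register it with the target, start bdevperf paused on its own RPC socket, attach an NVMe/TCP controller with --psk, and then drive the workload through bdevperf.py. A trimmed sketch of that driver side, using the same paths, flags and NQNs as the trace (the key value is elided, and a plain sleep stands in for the script's waitforlisten on the RPC socket):

#!/usr/bin/env bash
# Sketch of the TLS test driver traced above: bdevperf attaches to the
# NVMe/TCP target with a pre-shared key and runs a 10 s verify workload.
SPDK=/home/vagrant/spdk_repo/spdk
KEY=$SPDK/test/nvmf/fips/key.txt
SOCK=/var/tmp/bdevperf.sock

echo -n 'NVMeTLSkey-1:01:...' > "$KEY"   # full key value as shown in the trace
chmod 0600 "$KEY"

# Start bdevperf paused (-z) with its own RPC socket.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
sleep 1   # the real script waits on the RPC socket (waitforlisten) instead

# Attach a TLS-protected controller using the PSK file, then run the workload.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests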
01:01:12.047 01:01:12.047 Latency(us) 01:01:12.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:12.047 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:01:12.047 Verification LBA range: start 0x0 length 0x2000 01:01:12.047 TLSTESTn1 : 10.01 5925.89 23.15 0.00 0.00 21565.74 4711.22 15581.25 01:01:12.047 =================================================================================================================== 01:01:12.047 Total : 5925.89 23.15 0.00 0.00 21565.74 4711.22 15581.25 01:01:12.047 0 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:01:12.047 nvmf_trace.0 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 101687 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 101687 ']' 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 101687 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101687 01:01:12.047 killing process with pid 101687 01:01:12.047 Received shutdown signal, test time was about 10.000000 seconds 01:01:12.047 01:01:12.047 Latency(us) 01:01:12.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:12.047 =================================================================================================================== 01:01:12.047 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101687' 01:01:12.047 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 101687 01:01:12.048 [2024-07-22 10:56:19.902109] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:01:12.048 10:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 101687 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # 
nvmfcleanup 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:01:12.306 rmmod nvme_tcp 01:01:12.306 rmmod nvme_fabrics 01:01:12.306 rmmod nvme_keyring 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 101635 ']' 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 101635 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 101635 ']' 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 101635 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 01:01:12.306 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101635 01:01:12.565 killing process with pid 101635 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101635' 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 101635 01:01:12.565 [2024-07-22 10:56:20.268193] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 101635 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:01:12.565 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:01:12.824 ************************************ 01:01:12.824 END TEST nvmf_fips 01:01:12.824 ************************************ 01:01:12.824 01:01:12.824 real 0m14.064s 01:01:12.824 user 0m17.813s 01:01:12.824 sys 0m6.251s 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 
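For reference, the FIPS sanity checks near the start of this test (fips.sh@85 through @127 in the trace) boil down to three assertions: OpenSSL is at least 3.0.0, a "fips" provider shows up in openssl list -providers, and a non-approved digest such as MD5 refuses to run. A minimal stand-alone sketch of those checks follows; it is not the exact fips.sh logic, and the version floor here leans on sort -V rather than the cmp_versions helper from scripts/common.sh:

#!/usr/bin/env bash
# Minimal sketch of the FIPS sanity checks: version floor, provider list,
# and proof that a non-approved digest (MD5) is actually rejected.

ver=$(openssl version | awk '{print $2}')
if [[ $(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1) != 3.0.0 ]]; then
    echo "OpenSSL $ver is older than 3.0.0" >&2; exit 1
fi

# A provider whose name contains "fips" must be loaded (the trace shows the RHEL 9 one).
if ! openssl list -providers | grep -qi 'fips'; then
    echo "no FIPS provider loaded" >&2; exit 1
fi

# MD5 is not FIPS-approved, so this must fail - exactly as it does in the trace.
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 unexpectedly succeeded; FIPS mode is not enforced" >&2; exit 1
fi
echo "FIPS mode appears to be enforced"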
01:01:12.824 10:56:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:01:12.824 10:56:20 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 01:01:12.824 10:56:20 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 01:01:12.824 10:56:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:01:12.824 10:56:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:12.824 10:56:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:01:12.824 ************************************ 01:01:12.824 START TEST nvmf_fuzz 01:01:12.824 ************************************ 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 01:01:12.824 * Looking for test storage... 01:01:12.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:01:12.824 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 01:01:13.083 10:56:20 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:01:13.083 Cannot find device "nvmf_tgt_br" 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:01:13.083 Cannot find device "nvmf_tgt_br2" 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:01:13.083 Cannot find device "nvmf_tgt_br" 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:01:13.083 Cannot find device "nvmf_tgt_br2" 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:13.083 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:13.083 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:01:13.083 10:56:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:01:13.083 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:01:13.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:01:13.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 01:01:13.342 01:01:13.342 --- 10.0.0.2 ping statistics --- 01:01:13.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:13.342 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:01:13.342 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:01:13.342 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 01:01:13.342 01:01:13.342 --- 10.0.0.3 ping statistics --- 01:01:13.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:13.342 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:01:13.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:01:13.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 01:01:13.342 01:01:13.342 --- 10.0.0.1 ping statistics --- 01:01:13.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:13.342 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=102025 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 102025 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 102025 ']' 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:13.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
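The nvmf_veth_init block repeated for each test above builds the same topology every time: a namespace nvmf_tgt_ns_spdk holding the target interfaces, veth pairs whose host-side peers hang off a bridge nvmf_br, addresses 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target), an iptables accept rule for port 4420, and pings to prove reachability in both directions. A condensed sketch of that topology, using the same device names as the trace (root required; the second target interface is left out for brevity):

#!/usr/bin/env bash
# Condensed sketch of the veth/bridge/netns topology built by nvmf_veth_init.
set -e
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator side, one for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator address on the host, target address inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Let NVMe/TCP (port 4420) in and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability check, mirroring the pings in the log.
ping -c 1 10.0.0.2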
01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:13.342 10:56:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:01:14.277 Malloc0 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 01:01:14.277 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 01:01:14.535 Shutting down the fuzz application 01:01:14.535 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 01:01:14.792 Shutting down the fuzz application 01:01:14.792 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:01:14.792 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:14.792 10:56:22 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:01:15.050 rmmod nvme_tcp 01:01:15.050 rmmod nvme_fabrics 01:01:15.050 rmmod nvme_keyring 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 102025 ']' 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 102025 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 102025 ']' 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 102025 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102025 01:01:15.050 killing process with pid 102025 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102025' 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 102025 01:01:15.050 10:56:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 102025 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 01:01:15.307 ************************************ 01:01:15.307 END TEST nvmf_fuzz 01:01:15.307 
************************************ 01:01:15.307 01:01:15.307 real 0m2.607s 01:01:15.307 user 0m2.350s 01:01:15.307 sys 0m0.742s 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:15.307 10:56:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:01:15.565 10:56:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:01:15.565 10:56:23 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 01:01:15.565 10:56:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:01:15.565 10:56:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:15.565 10:56:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:01:15.565 ************************************ 01:01:15.565 START TEST nvmf_multiconnection 01:01:15.565 ************************************ 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 01:01:15.565 * Looking for test storage... 01:01:15.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:15.565 10:56:23 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:01:15.566 Cannot find device "nvmf_tgt_br" 01:01:15.566 10:56:23 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 01:01:15.566 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:01:15.824 Cannot find device "nvmf_tgt_br2" 01:01:15.824 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 01:01:15.824 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:01:15.824 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:01:15.825 Cannot find device "nvmf_tgt_br" 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:01:15.825 Cannot find device "nvmf_tgt_br2" 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:15.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:15.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:01:15.825 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:01:16.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:01:16.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 01:01:16.084 01:01:16.084 --- 10.0.0.2 ping statistics --- 01:01:16.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:16.084 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:01:16.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:01:16.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 01:01:16.084 01:01:16.084 --- 10.0.0.3 ping statistics --- 01:01:16.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:16.084 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:01:16.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:01:16.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 01:01:16.084 01:01:16.084 --- 10.0.0.1 ping statistics --- 01:01:16.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:16.084 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=102237 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 102237 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 102237 ']' 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:16.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:16.084 10:56:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:16.084 [2024-07-22 10:56:23.986944] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:01:16.084 [2024-07-22 10:56:23.987004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:16.342 [2024-07-22 10:56:24.105891] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
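For reference, the nvmf_veth_init sequence traced above reduces to the following standalone sketch. It reuses the namespace, interface and address names that appear in the log (nvmf_tgt_ns_spdk, nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_br, 10.0.0.1-3), but it is an approximation of what nvmf/common.sh does rather than a verbatim copy of it:

#!/usr/bin/env bash
# Sketch of the veth/netns topology built by nvmf_veth_init (names taken from the trace above).
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target namespace -> host
# The target is then launched inside the namespace, exactly as in the trace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &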
01:01:16.342 [2024-07-22 10:56:24.129051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:01:16.342 [2024-07-22 10:56:24.170557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:16.342 [2024-07-22 10:56:24.170845] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:16.342 [2024-07-22 10:56:24.171050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:16.342 [2024-07-22 10:56:24.171096] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:16.342 [2024-07-22 10:56:24.171121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:16.342 [2024-07-22 10:56:24.171758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:01:16.342 [2024-07-22 10:56:24.171816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:01:16.342 [2024-07-22 10:56:24.171906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:16.342 [2024-07-22 10:56:24.171907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:01:16.910 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:16.910 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 01:01:16.910 10:56:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:01:16.910 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 01:01:16.910 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 [2024-07-22 10:56:24.890448] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 Malloc1 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 [2024-07-22 10:56:24.967719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 Malloc2 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 Malloc3 01:01:17.170 
10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.170 Malloc4 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.170 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 Malloc5 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 Malloc6 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 Malloc7 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 01:01:17.429 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.430 Malloc8 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 01:01:17.430 
10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.430 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 Malloc9 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 Malloc10 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 Malloc11 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:17.688 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:01:17.946 10:56:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 01:01:17.946 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:17.946 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:17.946 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:01:17.946 10:56:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:19.849 10:56:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:19.849 10:56:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:01:19.849 10:56:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 01:01:19.849 10:56:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:19.849 10:56:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:01:19.849 10:56:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:19.849 10:56:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:19.849 10:56:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 01:01:20.108 10:56:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 01:01:20.108 10:56:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:20.108 10:56:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:20.108 10:56:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:01:20.108 10:56:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:22.012 10:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:22.012 10:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:01:22.012 10:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 01:01:22.012 10:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:22.012 10:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:01:22.012 10:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:22.012 10:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:22.012 10:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 01:01:22.270 10:56:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 01:01:22.270 10:56:30 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:22.270 10:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:22.270 10:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:01:22.270 10:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:24.209 10:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:24.209 10:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:01:24.209 10:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 01:01:24.209 10:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:24.209 10:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:01:24.209 10:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:24.209 10:56:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:24.209 10:56:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 01:01:24.467 10:56:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 01:01:24.467 10:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:24.467 10:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:24.467 10:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:01:24.467 10:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
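The repeating pattern in this stretch of the trace (one malloc bdev, one subsystem, one namespace and one TCP listener per index, followed by a host-side connect and a poll for the matching serial in lsblk) can be condensed into the sketch below. rpc_cmd is the test harness's JSON-RPC wrapper; the NQNs, serial numbers, host NQN/ID and the 10.0.0.2:4420 listener are the values visible in the log, and the bounded retry mirrors the waitforserial helper:

# Sketch of the per-subsystem provisioning and connect loops from multiconnection.sh, using the
# same names the log shows. This is a simplified rendering, not the script verbatim.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7
HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7

for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

for i in $(seq 1 11); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    for try in $(seq 1 15); do                             # waitforserial: up to 15 attempts, 2 s apart
        sleep 2
        [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ] && break
    done
done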
01:01:26.994 10:56:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:01:28.893 10:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:01:31.427 10:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:33.346 10:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:33.346 10:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 
01:01:33.346 10:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 01:01:33.346 10:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:33.346 10:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:01:33.346 10:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:33.346 10:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:33.346 10:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 01:01:33.346 10:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 01:01:33.346 10:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:33.346 10:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:33.346 10:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:01:33.346 10:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:01:35.880 10:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:01:37.778 10:56:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:01:40.312 10:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:01:42.216 10:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:01:42.216 10:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 01:01:42.216 10:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:01:42.216 10:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:01:42.216 10:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:01:42.216 10:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:01:42.216 10:56:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 01:01:42.216 
[global] 01:01:42.216 thread=1 01:01:42.216 invalidate=1 01:01:42.216 rw=read 01:01:42.216 time_based=1 01:01:42.216 runtime=10 01:01:42.216 ioengine=libaio 01:01:42.216 direct=1 01:01:42.216 bs=262144 01:01:42.216 iodepth=64 01:01:42.216 norandommap=1 01:01:42.216 numjobs=1 01:01:42.216 01:01:42.216 [job0] 01:01:42.216 filename=/dev/nvme0n1 01:01:42.216 [job1] 01:01:42.216 filename=/dev/nvme10n1 01:01:42.216 [job2] 01:01:42.216 filename=/dev/nvme1n1 01:01:42.216 [job3] 01:01:42.216 filename=/dev/nvme2n1 01:01:42.216 [job4] 01:01:42.216 filename=/dev/nvme3n1 01:01:42.216 [job5] 01:01:42.216 filename=/dev/nvme4n1 01:01:42.216 [job6] 01:01:42.216 filename=/dev/nvme5n1 01:01:42.216 [job7] 01:01:42.216 filename=/dev/nvme6n1 01:01:42.216 [job8] 01:01:42.216 filename=/dev/nvme7n1 01:01:42.216 [job9] 01:01:42.216 filename=/dev/nvme8n1 01:01:42.216 [job10] 01:01:42.216 filename=/dev/nvme9n1 01:01:42.474 Could not set queue depth (nvme0n1) 01:01:42.474 Could not set queue depth (nvme10n1) 01:01:42.474 Could not set queue depth (nvme1n1) 01:01:42.474 Could not set queue depth (nvme2n1) 01:01:42.474 Could not set queue depth (nvme3n1) 01:01:42.474 Could not set queue depth (nvme4n1) 01:01:42.474 Could not set queue depth (nvme5n1) 01:01:42.474 Could not set queue depth (nvme6n1) 01:01:42.474 Could not set queue depth (nvme7n1) 01:01:42.474 Could not set queue depth (nvme8n1) 01:01:42.474 Could not set queue depth (nvme9n1) 01:01:42.474 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:42.474 fio-3.35 01:01:42.474 Starting 11 threads 01:01:54.655 01:01:54.655 job0: (groupid=0, jobs=1): err= 0: pid=102714: Mon Jul 22 10:57:00 2024 01:01:54.655 read: IOPS=557, BW=139MiB/s (146MB/s)(1410MiB/10123msec) 01:01:54.655 slat (usec): min=15, max=139393, avg=1734.68, stdev=9404.28 01:01:54.655 clat (msec): min=12, max=296, avg=112.84, stdev=76.14 01:01:54.655 lat (msec): min=12, max=326, avg=114.58, stdev=77.80 01:01:54.655 clat percentiles (msec): 01:01:54.655 | 1.00th=[ 16], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 26], 01:01:54.655 | 30.00th=[ 29], 40.00th=[ 39], 50.00th=[ 153], 60.00th=[ 167], 01:01:54.655 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 199], 01:01:54.655 | 99.00th=[ 232], 99.50th=[ 279], 99.90th=[ 288], 
99.95th=[ 292], 01:01:54.655 | 99.99th=[ 296] 01:01:54.655 bw ( KiB/s): min=71680, max=583536, per=6.98%, avg=142698.25, stdev=150905.69, samples=20 01:01:54.655 iops : min= 280, max= 2279, avg=557.25, stdev=589.35, samples=20 01:01:54.655 lat (msec) : 20=5.66%, 50=36.67%, 100=0.09%, 250=57.00%, 500=0.59% 01:01:54.655 cpu : usr=0.24%, sys=2.69%, ctx=1284, majf=0, minf=4097 01:01:54.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 01:01:54.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:54.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.655 issued rwts: total=5640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.655 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.655 job1: (groupid=0, jobs=1): err= 0: pid=102716: Mon Jul 22 10:57:00 2024 01:01:54.655 read: IOPS=349, BW=87.3MiB/s (91.5MB/s)(884MiB/10127msec) 01:01:54.655 slat (usec): min=16, max=90764, avg=2773.85, stdev=9006.62 01:01:54.656 clat (msec): min=16, max=302, avg=180.19, stdev=30.79 01:01:54.656 lat (msec): min=19, max=318, avg=182.96, stdev=32.28 01:01:54.656 clat percentiles (msec): 01:01:54.656 | 1.00th=[ 38], 5.00th=[ 140], 10.00th=[ 153], 20.00th=[ 165], 01:01:54.656 | 30.00th=[ 171], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 188], 01:01:54.656 | 70.00th=[ 194], 80.00th=[ 201], 90.00th=[ 207], 95.00th=[ 215], 01:01:54.656 | 99.00th=[ 245], 99.50th=[ 264], 99.90th=[ 305], 99.95th=[ 305], 01:01:54.656 | 99.99th=[ 305] 01:01:54.656 bw ( KiB/s): min=74752, max=108544, per=4.34%, avg=88828.65, stdev=9753.06, samples=20 01:01:54.656 iops : min= 292, max= 424, avg=346.90, stdev=38.11, samples=20 01:01:54.656 lat (msec) : 20=0.06%, 50=1.78%, 100=0.31%, 250=97.06%, 500=0.79% 01:01:54.656 cpu : usr=0.22%, sys=1.88%, ctx=949, majf=0, minf=4097 01:01:54.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 01:01:54.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:54.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.656 issued rwts: total=3535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.656 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.656 job2: (groupid=0, jobs=1): err= 0: pid=102717: Mon Jul 22 10:57:00 2024 01:01:54.656 read: IOPS=2157, BW=539MiB/s (566MB/s)(5403MiB/10016msec) 01:01:54.656 slat (usec): min=15, max=84780, avg=446.44, stdev=2369.12 01:01:54.656 clat (usec): min=1300, max=261847, avg=29140.42, stdev=24086.55 01:01:54.656 lat (usec): min=1361, max=261904, avg=29586.86, stdev=24481.38 01:01:54.656 clat percentiles (msec): 01:01:54.656 | 1.00th=[ 9], 5.00th=[ 16], 10.00th=[ 18], 20.00th=[ 20], 01:01:54.656 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 26], 01:01:54.656 | 70.00th=[ 28], 80.00th=[ 30], 90.00th=[ 40], 95.00th=[ 73], 01:01:54.656 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 247], 99.95th=[ 249], 01:01:54.656 | 99.99th=[ 249] 01:01:54.656 bw ( KiB/s): min=87552, max=714240, per=26.96%, avg=551397.55, stdev=209906.32, samples=20 01:01:54.656 iops : min= 342, max= 2790, avg=2153.80, stdev=819.97, samples=20 01:01:54.656 lat (msec) : 2=0.06%, 4=0.13%, 10=1.29%, 20=23.16%, 50=67.83% 01:01:54.656 lat (msec) : 100=5.73%, 250=1.80%, 500=0.01% 01:01:54.656 cpu : usr=0.96%, sys=10.19%, ctx=6110, majf=0, minf=4097 01:01:54.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 01:01:54.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
01:01:54.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.656 issued rwts: total=21613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.656 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.656 job3: (groupid=0, jobs=1): err= 0: pid=102718: Mon Jul 22 10:57:00 2024 01:01:54.656 read: IOPS=355, BW=88.8MiB/s (93.1MB/s)(899MiB/10126msec) 01:01:54.656 slat (usec): min=16, max=118858, avg=2783.69, stdev=11285.51 01:01:54.656 clat (msec): min=30, max=289, avg=177.02, stdev=28.09 01:01:54.656 lat (msec): min=33, max=300, avg=179.80, stdev=30.28 01:01:54.656 clat percentiles (msec): 01:01:54.656 | 1.00th=[ 78], 5.00th=[ 140], 10.00th=[ 148], 20.00th=[ 159], 01:01:54.656 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 184], 01:01:54.656 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 207], 95.00th=[ 218], 01:01:54.656 | 99.00th=[ 253], 99.50th=[ 266], 99.90th=[ 288], 99.95th=[ 288], 01:01:54.656 | 99.99th=[ 292] 01:01:54.656 bw ( KiB/s): min=68608, max=117013, per=4.42%, avg=90346.05, stdev=11891.88, samples=20 01:01:54.656 iops : min= 268, max= 457, avg=352.85, stdev=46.45, samples=20 01:01:54.656 lat (msec) : 50=0.72%, 100=1.39%, 250=96.72%, 500=1.17% 01:01:54.656 cpu : usr=0.15%, sys=1.99%, ctx=958, majf=0, minf=4097 01:01:54.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 01:01:54.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:54.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.656 issued rwts: total=3595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.656 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.656 job4: (groupid=0, jobs=1): err= 0: pid=102719: Mon Jul 22 10:57:00 2024 01:01:54.656 read: IOPS=350, BW=87.7MiB/s (92.0MB/s)(888MiB/10121msec) 01:01:54.656 slat (usec): min=16, max=97248, avg=2807.16, stdev=9237.16 01:01:54.656 clat (msec): min=27, max=299, avg=179.14, stdev=28.75 01:01:54.656 lat (msec): min=31, max=309, avg=181.95, stdev=30.14 01:01:54.656 clat percentiles (msec): 01:01:54.656 | 1.00th=[ 59], 5.00th=[ 140], 10.00th=[ 150], 20.00th=[ 161], 01:01:54.656 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 01:01:54.656 | 70.00th=[ 190], 80.00th=[ 197], 90.00th=[ 211], 95.00th=[ 226], 01:01:54.656 | 99.00th=[ 245], 99.50th=[ 257], 99.90th=[ 300], 99.95th=[ 300], 01:01:54.656 | 99.99th=[ 300] 01:01:54.656 bw ( KiB/s): min=66560, max=112926, per=4.36%, avg=89266.40, stdev=9809.76, samples=20 01:01:54.656 iops : min= 260, max= 441, avg=348.55, stdev=38.31, samples=20 01:01:54.656 lat (msec) : 50=0.25%, 100=1.58%, 250=97.18%, 500=0.99% 01:01:54.656 cpu : usr=0.19%, sys=1.94%, ctx=866, majf=0, minf=4097 01:01:54.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 01:01:54.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:54.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.656 issued rwts: total=3551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.656 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.656 job5: (groupid=0, jobs=1): err= 0: pid=102720: Mon Jul 22 10:57:00 2024 01:01:54.656 read: IOPS=1129, BW=282MiB/s (296MB/s)(2833MiB/10036msec) 01:01:54.656 slat (usec): min=16, max=115016, avg=848.17, stdev=3788.85 01:01:54.656 clat (msec): min=2, max=314, avg=55.73, stdev=30.79 01:01:54.656 lat (msec): min=2, max=314, avg=56.57, stdev=31.33 01:01:54.656 clat percentiles (msec): 
01:01:54.656 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 27], 20.00th=[ 37], 01:01:54.656 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 51], 60.00th=[ 55], 01:01:54.656 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 92], 01:01:54.656 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 305], 99.95th=[ 305], 01:01:54.656 | 99.99th=[ 313] 01:01:54.656 bw ( KiB/s): min=77824, max=584704, per=13.68%, avg=279833.11, stdev=116182.64, samples=19 01:01:54.656 iops : min= 304, max= 2284, avg=1093.05, stdev=453.86, samples=19 01:01:54.656 lat (msec) : 4=0.02%, 10=0.39%, 20=3.45%, 50=44.87%, 100=47.44% 01:01:54.656 lat (msec) : 250=3.57%, 500=0.26% 01:01:54.656 cpu : usr=0.62%, sys=5.62%, ctx=2225, majf=0, minf=4097 01:01:54.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 01:01:54.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:54.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.656 issued rwts: total=11332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.656 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.656 job6: (groupid=0, jobs=1): err= 0: pid=102721: Mon Jul 22 10:57:00 2024 01:01:54.656 read: IOPS=353, BW=88.5MiB/s (92.8MB/s)(895MiB/10119msec) 01:01:54.656 slat (usec): min=14, max=118152, avg=2716.11, stdev=11144.14 01:01:54.656 clat (msec): min=16, max=295, avg=177.69, stdev=32.54 01:01:54.656 lat (msec): min=16, max=347, avg=180.41, stdev=34.60 01:01:54.656 clat percentiles (msec): 01:01:54.656 | 1.00th=[ 23], 5.00th=[ 144], 10.00th=[ 150], 20.00th=[ 161], 01:01:54.656 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 184], 01:01:54.656 | 70.00th=[ 188], 80.00th=[ 197], 90.00th=[ 207], 95.00th=[ 228], 01:01:54.656 | 99.00th=[ 255], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 296], 01:01:54.656 | 99.99th=[ 296] 01:01:54.656 bw ( KiB/s): min=69120, max=108838, per=4.39%, avg=89757.85, stdev=10182.44, samples=20 01:01:54.656 iops : min= 270, max= 425, avg=350.50, stdev=39.81, samples=20 01:01:54.656 lat (msec) : 20=0.50%, 50=1.70%, 100=0.06%, 250=96.54%, 500=1.20% 01:01:54.656 cpu : usr=0.20%, sys=1.90%, ctx=803, majf=0, minf=4097 01:01:54.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 01:01:54.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:54.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.656 issued rwts: total=3581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.656 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.656 job7: (groupid=0, jobs=1): err= 0: pid=102722: Mon Jul 22 10:57:00 2024 01:01:54.656 read: IOPS=1198, BW=300MiB/s (314MB/s)(3000MiB/10017msec) 01:01:54.656 slat (usec): min=14, max=58781, avg=810.14, stdev=3180.36 01:01:54.656 clat (msec): min=13, max=128, avg=52.47, stdev=19.33 01:01:54.656 lat (msec): min=13, max=140, avg=53.28, stdev=19.71 01:01:54.656 clat percentiles (msec): 01:01:54.656 | 1.00th=[ 18], 5.00th=[ 23], 10.00th=[ 27], 20.00th=[ 37], 01:01:54.656 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 51], 60.00th=[ 55], 01:01:54.656 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 79], 95.00th=[ 87], 01:01:54.656 | 99.00th=[ 107], 99.50th=[ 116], 99.90th=[ 123], 99.95th=[ 128], 01:01:54.656 | 99.99th=[ 129] 01:01:54.656 bw ( KiB/s): min=171688, max=544702, per=14.17%, avg=289757.79, stdev=82572.39, samples=19 01:01:54.656 iops : min= 670, max= 2127, avg=1131.68, stdev=322.48, samples=19 01:01:54.656 lat (msec) : 20=1.82%, 50=47.81%, 100=48.85%, 
250=1.51% 01:01:54.656 cpu : usr=0.55%, sys=5.92%, ctx=2785, majf=0, minf=4097 01:01:54.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 01:01:54.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:54.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.656 issued rwts: total=12001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.656 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.656 job8: (groupid=0, jobs=1): err= 0: pid=102723: Mon Jul 22 10:57:00 2024 01:01:54.656 read: IOPS=613, BW=153MiB/s (161MB/s)(1541MiB/10046msec) 01:01:54.656 slat (usec): min=15, max=167874, avg=1570.60, stdev=7585.52 01:01:54.656 clat (msec): min=6, max=296, avg=102.47, stdev=76.38 01:01:54.656 lat (msec): min=6, max=342, avg=104.04, stdev=77.86 01:01:54.656 clat percentiles (msec): 01:01:54.656 | 1.00th=[ 15], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 27], 01:01:54.656 | 30.00th=[ 31], 40.00th=[ 45], 50.00th=[ 58], 60.00th=[ 157], 01:01:54.656 | 70.00th=[ 174], 80.00th=[ 188], 90.00th=[ 199], 95.00th=[ 207], 01:01:54.656 | 99.00th=[ 230], 99.50th=[ 245], 99.90th=[ 275], 99.95th=[ 275], 01:01:54.656 | 99.99th=[ 296] 01:01:54.656 bw ( KiB/s): min=70003, max=612662, per=7.63%, avg=156052.35, stdev=158503.85, samples=20 01:01:54.656 iops : min= 273, max= 2393, avg=609.45, stdev=619.16, samples=20 01:01:54.656 lat (msec) : 10=0.37%, 20=5.18%, 50=39.84%, 100=8.03%, 250=46.14% 01:01:54.656 lat (msec) : 500=0.44% 01:01:54.656 cpu : usr=0.20%, sys=3.04%, ctx=1581, majf=0, minf=4097 01:01:54.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:01:54.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:54.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.657 issued rwts: total=6164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.657 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.657 job9: (groupid=0, jobs=1): err= 0: pid=102724: Mon Jul 22 10:57:00 2024 01:01:54.657 read: IOPS=360, BW=90.1MiB/s (94.5MB/s)(913MiB/10125msec) 01:01:54.657 slat (usec): min=17, max=110011, avg=2667.15, stdev=9643.48 01:01:54.657 clat (usec): min=1728, max=298669, avg=174414.23, stdev=38092.84 01:01:54.657 lat (usec): min=1831, max=309660, avg=177081.38, stdev=39670.79 01:01:54.657 clat percentiles (msec): 01:01:54.657 | 1.00th=[ 22], 5.00th=[ 116], 10.00th=[ 146], 20.00th=[ 163], 01:01:54.657 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 01:01:54.657 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 205], 95.00th=[ 213], 01:01:54.657 | 99.00th=[ 245], 99.50th=[ 266], 99.90th=[ 300], 99.95th=[ 300], 01:01:54.657 | 99.99th=[ 300] 01:01:54.657 bw ( KiB/s): min=68096, max=133386, per=4.49%, avg=91794.25, stdev=13425.95, samples=20 01:01:54.657 iops : min= 266, max= 521, avg=358.50, stdev=52.45, samples=20 01:01:54.657 lat (msec) : 2=0.03%, 4=0.08%, 10=0.60%, 20=0.08%, 50=3.34% 01:01:54.657 lat (msec) : 100=0.58%, 250=94.38%, 500=0.90% 01:01:54.657 cpu : usr=0.25%, sys=2.01%, ctx=896, majf=0, minf=4097 01:01:54.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 01:01:54.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:54.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.657 issued rwts: total=3650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.657 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.657 job10: 
(groupid=0, jobs=1): err= 0: pid=102725: Mon Jul 22 10:57:00 2024 01:01:54.657 read: IOPS=617, BW=154MiB/s (162MB/s)(1563MiB/10126msec) 01:01:54.657 slat (usec): min=20, max=134485, avg=1573.34, stdev=6567.94 01:01:54.657 clat (msec): min=10, max=305, avg=101.82, stdev=61.90 01:01:54.657 lat (msec): min=10, max=350, avg=103.39, stdev=63.07 01:01:54.657 clat percentiles (msec): 01:01:54.657 | 1.00th=[ 20], 5.00th=[ 34], 10.00th=[ 45], 20.00th=[ 55], 01:01:54.657 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 83], 01:01:54.657 | 70.00th=[ 96], 80.00th=[ 188], 90.00th=[ 201], 95.00th=[ 209], 01:01:54.657 | 99.00th=[ 239], 99.50th=[ 257], 99.90th=[ 300], 99.95th=[ 300], 01:01:54.657 | 99.99th=[ 305] 01:01:54.657 bw ( KiB/s): min=70144, max=348672, per=7.74%, avg=158361.85, stdev=90919.57, samples=20 01:01:54.657 iops : min= 274, max= 1362, avg=618.60, stdev=355.35, samples=20 01:01:54.657 lat (msec) : 20=1.07%, 50=13.48%, 100=56.49%, 250=28.36%, 500=0.59% 01:01:54.657 cpu : usr=0.33%, sys=3.37%, ctx=1520, majf=0, minf=4097 01:01:54.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:01:54.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:54.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:01:54.657 issued rwts: total=6252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:54.657 latency : target=0, window=0, percentile=100.00%, depth=64 01:01:54.657 01:01:54.657 Run status group 0 (all jobs): 01:01:54.657 READ: bw=1997MiB/s (2095MB/s), 87.3MiB/s-539MiB/s (91.5MB/s-566MB/s), io=19.8GiB (21.2GB), run=10016-10127msec 01:01:54.657 01:01:54.657 Disk stats (read/write): 01:01:54.657 nvme0n1: ios=11206/0, merge=0/0, ticks=1240937/0, in_queue=1240937, util=97.58% 01:01:54.657 nvme10n1: ios=6973/0, merge=0/0, ticks=1240045/0, in_queue=1240045, util=97.72% 01:01:54.657 nvme1n1: ios=42178/0, merge=0/0, ticks=1151129/0, in_queue=1151129, util=97.47% 01:01:54.657 nvme2n1: ios=7096/0, merge=0/0, ticks=1237358/0, in_queue=1237358, util=98.01% 01:01:54.657 nvme3n1: ios=7022/0, merge=0/0, ticks=1241194/0, in_queue=1241194, util=98.02% 01:01:54.657 nvme4n1: ios=21995/0, merge=0/0, ticks=1198074/0, in_queue=1198074, util=98.01% 01:01:54.657 nvme5n1: ios=7048/0, merge=0/0, ticks=1232965/0, in_queue=1232965, util=98.28% 01:01:54.657 nvme6n1: ios=22932/0, merge=0/0, ticks=1201051/0, in_queue=1201051, util=98.10% 01:01:54.657 nvme7n1: ios=11673/0, merge=0/0, ticks=1209181/0, in_queue=1209181, util=98.26% 01:01:54.657 nvme8n1: ios=7221/0, merge=0/0, ticks=1241578/0, in_queue=1241578, util=98.76% 01:01:54.657 nvme9n1: ios=12414/0, merge=0/0, ticks=1233967/0, in_queue=1233967, util=98.80% 01:01:54.657 10:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 01:01:54.657 [global] 01:01:54.657 thread=1 01:01:54.657 invalidate=1 01:01:54.657 rw=randwrite 01:01:54.657 time_based=1 01:01:54.657 runtime=10 01:01:54.657 ioengine=libaio 01:01:54.657 direct=1 01:01:54.657 bs=262144 01:01:54.657 iodepth=64 01:01:54.657 norandommap=1 01:01:54.657 numjobs=1 01:01:54.657 01:01:54.657 [job0] 01:01:54.657 filename=/dev/nvme0n1 01:01:54.657 [job1] 01:01:54.657 filename=/dev/nvme10n1 01:01:54.657 [job2] 01:01:54.657 filename=/dev/nvme1n1 01:01:54.657 [job3] 01:01:54.657 filename=/dev/nvme2n1 01:01:54.657 [job4] 01:01:54.657 filename=/dev/nvme3n1 01:01:54.657 [job5] 01:01:54.657 filename=/dev/nvme4n1 01:01:54.657 [job6] 01:01:54.657 
filename=/dev/nvme5n1 01:01:54.657 [job7] 01:01:54.657 filename=/dev/nvme6n1 01:01:54.657 [job8] 01:01:54.657 filename=/dev/nvme7n1 01:01:54.657 [job9] 01:01:54.657 filename=/dev/nvme8n1 01:01:54.657 [job10] 01:01:54.657 filename=/dev/nvme9n1 01:01:54.657 Could not set queue depth (nvme0n1) 01:01:54.657 Could not set queue depth (nvme10n1) 01:01:54.657 Could not set queue depth (nvme1n1) 01:01:54.657 Could not set queue depth (nvme2n1) 01:01:54.657 Could not set queue depth (nvme3n1) 01:01:54.657 Could not set queue depth (nvme4n1) 01:01:54.657 Could not set queue depth (nvme5n1) 01:01:54.657 Could not set queue depth (nvme6n1) 01:01:54.657 Could not set queue depth (nvme7n1) 01:01:54.657 Could not set queue depth (nvme8n1) 01:01:54.657 Could not set queue depth (nvme9n1) 01:01:54.657 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:01:54.657 fio-3.35 01:01:54.657 Starting 11 threads 01:02:04.635 01:02:04.635 job0: (groupid=0, jobs=1): err= 0: pid=102925: Mon Jul 22 10:57:11 2024 01:02:04.635 write: IOPS=469, BW=117MiB/s (123MB/s)(1188MiB/10127msec); 0 zone resets 01:02:04.635 slat (usec): min=18, max=53372, avg=2040.52, stdev=4038.97 01:02:04.635 clat (usec): min=1504, max=275692, avg=134309.90, stdev=42568.48 01:02:04.635 lat (usec): min=1535, max=275757, avg=136350.42, stdev=43072.15 01:02:04.635 clat percentiles (msec): 01:02:04.635 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 87], 20.00th=[ 122], 01:02:04.635 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 134], 60.00th=[ 150], 01:02:04.635 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 180], 95.00th=[ 188], 01:02:04.635 | 99.00th=[ 205], 99.50th=[ 211], 99.90th=[ 268], 99.95th=[ 275], 01:02:04.635 | 99.99th=[ 275] 01:02:04.635 bw ( KiB/s): min=88064, max=212055, per=6.67%, avg=120013.95, stdev=30659.67, samples=20 01:02:04.635 iops : min= 344, max= 828, avg=468.70, stdev=119.63, samples=20 01:02:04.635 lat (msec) : 2=0.08%, 4=0.38%, 10=1.89%, 20=2.48%, 50=2.00% 01:02:04.635 lat (msec) : 100=7.11%, 250=85.79%, 500=0.25% 01:02:04.635 cpu : usr=1.05%, sys=1.80%, ctx=4740, majf=0, minf=1 01:02:04.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 01:02:04.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.635 issued rwts: total=0,4751,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.635 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.635 job1: (groupid=0, jobs=1): err= 0: pid=102926: Mon Jul 22 10:57:11 2024 01:02:04.635 write: IOPS=523, BW=131MiB/s (137MB/s)(1319MiB/10082msec); 0 zone resets 01:02:04.635 slat (usec): min=19, max=30162, avg=1820.04, stdev=3322.22 01:02:04.635 clat (msec): min=5, max=214, avg=120.46, stdev=33.10 01:02:04.635 lat (msec): min=6, max=214, avg=122.28, stdev=33.55 01:02:04.635 clat percentiles (msec): 01:02:04.635 | 1.00th=[ 27], 5.00th=[ 80], 10.00th=[ 89], 20.00th=[ 92], 01:02:04.635 | 30.00th=[ 95], 40.00th=[ 110], 50.00th=[ 125], 60.00th=[ 129], 01:02:04.635 | 70.00th=[ 133], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 169], 01:02:04.635 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 207], 99.95th=[ 211], 01:02:04.635 | 99.99th=[ 215] 01:02:04.635 bw ( KiB/s): min=98304, max=196096, per=7.41%, avg=133427.20, stdev=30977.79, samples=20 01:02:04.635 iops : min= 384, max= 766, avg=521.20, stdev=121.01, samples=20 01:02:04.635 lat (msec) : 10=0.13%, 20=0.32%, 50=2.58%, 100=34.58%, 250=62.39% 01:02:04.635 cpu : usr=1.34%, sys=1.78%, ctx=7481, majf=0, minf=1 01:02:04.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 01:02:04.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.635 issued rwts: total=0,5275,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.635 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.635 job2: (groupid=0, jobs=1): err= 0: pid=102931: Mon Jul 22 10:57:11 2024 01:02:04.635 write: IOPS=735, BW=184MiB/s (193MB/s)(1855MiB/10081msec); 0 zone resets 01:02:04.635 slat (usec): min=19, max=27303, avg=1254.95, stdev=2413.27 01:02:04.635 clat (msec): min=2, max=182, avg=85.68, stdev=30.93 01:02:04.635 lat (msec): min=3, max=185, avg=86.94, stdev=31.37 01:02:04.635 clat percentiles (msec): 01:02:04.635 | 1.00th=[ 18], 5.00th=[ 50], 10.00th=[ 60], 20.00th=[ 62], 01:02:04.635 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 87], 60.00th=[ 92], 01:02:04.635 | 70.00th=[ 95], 80.00th=[ 123], 90.00th=[ 129], 95.00th=[ 131], 01:02:04.635 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 178], 99.95th=[ 180], 01:02:04.635 | 99.99th=[ 184] 01:02:04.635 bw ( KiB/s): min=114176, max=280064, per=10.46%, avg=188151.15, stdev=58588.38, samples=20 01:02:04.635 iops : min= 446, max= 1094, avg=734.80, stdev=228.86, samples=20 01:02:04.635 lat (msec) : 4=0.03%, 10=0.34%, 20=1.06%, 50=3.64%, 100=68.35% 01:02:04.635 lat (msec) : 250=26.58% 01:02:04.635 cpu : usr=1.71%, sys=2.58%, ctx=10094, majf=0, minf=1 01:02:04.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 01:02:04.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.635 issued rwts: total=0,7419,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.635 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.635 job3: (groupid=0, jobs=1): err= 0: pid=102939: Mon Jul 22 10:57:11 2024 01:02:04.635 write: IOPS=595, BW=149MiB/s (156MB/s)(1502MiB/10081msec); 0 zone resets 01:02:04.635 slat (usec): min=20, max=21762, avg=1640.63, stdev=2879.56 01:02:04.635 clat (msec): min=8, max=191, avg=105.71, 
stdev=22.49 01:02:04.635 lat (msec): min=8, max=191, avg=107.35, stdev=22.74 01:02:04.635 clat percentiles (msec): 01:02:04.635 | 1.00th=[ 53], 5.00th=[ 87], 10.00th=[ 87], 20.00th=[ 90], 01:02:04.635 | 30.00th=[ 92], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 100], 01:02:04.635 | 70.00th=[ 122], 80.00th=[ 128], 90.00th=[ 131], 95.00th=[ 148], 01:02:04.635 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 192], 01:02:04.635 | 99.99th=[ 192] 01:02:04.635 bw ( KiB/s): min=101376, max=181248, per=8.45%, avg=152140.80, stdev=26791.52, samples=20 01:02:04.635 iops : min= 396, max= 708, avg=594.30, stdev=104.65, samples=20 01:02:04.635 lat (msec) : 10=0.05%, 20=0.18%, 50=0.75%, 100=59.36%, 250=39.66% 01:02:04.635 cpu : usr=1.26%, sys=2.29%, ctx=7912, majf=0, minf=1 01:02:04.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:02:04.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.635 issued rwts: total=0,6006,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.635 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.635 job4: (groupid=0, jobs=1): err= 0: pid=102940: Mon Jul 22 10:57:11 2024 01:02:04.635 write: IOPS=488, BW=122MiB/s (128MB/s)(1236MiB/10126msec); 0 zone resets 01:02:04.635 slat (usec): min=14, max=35425, avg=1959.69, stdev=3791.63 01:02:04.635 clat (usec): min=953, max=266497, avg=129040.33, stdev=42944.15 01:02:04.635 lat (usec): min=1603, max=266566, avg=131000.03, stdev=43544.93 01:02:04.635 clat percentiles (msec): 01:02:04.635 | 1.00th=[ 4], 5.00th=[ 44], 10.00th=[ 62], 20.00th=[ 120], 01:02:04.635 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 144], 01:02:04.635 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 180], 95.00th=[ 186], 01:02:04.635 | 99.00th=[ 192], 99.50th=[ 213], 99.90th=[ 257], 99.95th=[ 257], 01:02:04.635 | 99.99th=[ 268] 01:02:04.635 bw ( KiB/s): min=88064, max=262656, per=6.95%, avg=125005.50, stdev=42794.29, samples=20 01:02:04.635 iops : min= 344, max= 1026, avg=488.05, stdev=167.27, samples=20 01:02:04.635 lat (usec) : 1000=0.02% 01:02:04.635 lat (msec) : 2=0.34%, 4=0.79%, 10=0.61%, 20=0.79%, 50=3.26% 01:02:04.635 lat (msec) : 100=12.76%, 250=81.25%, 500=0.18% 01:02:04.635 cpu : usr=1.04%, sys=1.92%, ctx=5924, majf=0, minf=1 01:02:04.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 01:02:04.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.635 issued rwts: total=0,4945,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.635 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.635 job5: (groupid=0, jobs=1): err= 0: pid=102941: Mon Jul 22 10:57:11 2024 01:02:04.635 write: IOPS=612, BW=153MiB/s (161MB/s)(1546MiB/10085msec); 0 zone resets 01:02:04.635 slat (usec): min=16, max=40268, avg=1564.88, stdev=2870.95 01:02:04.635 clat (msec): min=6, max=226, avg=102.77, stdev=24.61 01:02:04.635 lat (msec): min=6, max=234, avg=104.33, stdev=24.85 01:02:04.635 clat percentiles (msec): 01:02:04.635 | 1.00th=[ 33], 5.00th=[ 74], 10.00th=[ 86], 20.00th=[ 90], 01:02:04.635 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 97], 01:02:04.635 | 70.00th=[ 120], 80.00th=[ 127], 90.00th=[ 130], 95.00th=[ 133], 01:02:04.635 | 99.00th=[ 192], 99.50th=[ 203], 99.90th=[ 222], 99.95th=[ 226], 01:02:04.635 | 99.99th=[ 228] 01:02:04.635 bw ( KiB/s): 
min=112640, max=194560, per=8.71%, avg=156750.60, stdev=27159.05, samples=20 01:02:04.635 iops : min= 440, max= 760, avg=612.10, stdev=106.11, samples=20 01:02:04.635 lat (msec) : 10=0.19%, 20=0.34%, 50=1.05%, 100=63.36%, 250=35.05% 01:02:04.635 cpu : usr=1.31%, sys=2.03%, ctx=7949, majf=0, minf=1 01:02:04.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:02:04.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.635 issued rwts: total=0,6182,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.635 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.635 job6: (groupid=0, jobs=1): err= 0: pid=102943: Mon Jul 22 10:57:11 2024 01:02:04.635 write: IOPS=609, BW=152MiB/s (160MB/s)(1535MiB/10077msec); 0 zone resets 01:02:04.635 slat (usec): min=20, max=48490, avg=1609.76, stdev=2907.26 01:02:04.635 clat (usec): min=1588, max=222861, avg=103433.21, stdev=24246.65 01:02:04.635 lat (usec): min=1667, max=223078, avg=105042.96, stdev=24491.74 01:02:04.635 clat percentiles (msec): 01:02:04.635 | 1.00th=[ 54], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 90], 01:02:04.635 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 97], 01:02:04.635 | 70.00th=[ 121], 80.00th=[ 127], 90.00th=[ 130], 95.00th=[ 133], 01:02:04.635 | 99.00th=[ 194], 99.50th=[ 205], 99.90th=[ 213], 99.95th=[ 222], 01:02:04.635 | 99.99th=[ 224] 01:02:04.635 bw ( KiB/s): min=98304, max=183808, per=8.64%, avg=155483.80, stdev=26135.64, samples=20 01:02:04.635 iops : min= 384, max= 718, avg=607.30, stdev=102.03, samples=20 01:02:04.635 lat (msec) : 2=0.03%, 4=0.15%, 10=0.33%, 20=0.26%, 50=0.20% 01:02:04.636 lat (msec) : 100=63.67%, 250=35.37% 01:02:04.636 cpu : usr=1.24%, sys=2.38%, ctx=8764, majf=0, minf=1 01:02:04.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:02:04.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.636 issued rwts: total=0,6138,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.636 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.636 job7: (groupid=0, jobs=1): err= 0: pid=102946: Mon Jul 22 10:57:11 2024 01:02:04.636 write: IOPS=631, BW=158MiB/s (165MB/s)(1597MiB/10120msec); 0 zone resets 01:02:04.636 slat (usec): min=20, max=26370, avg=1507.53, stdev=2993.73 01:02:04.636 clat (msec): min=2, max=249, avg=99.86, stdev=44.35 01:02:04.636 lat (msec): min=3, max=249, avg=101.36, stdev=44.98 01:02:04.636 clat percentiles (msec): 01:02:04.636 | 1.00th=[ 14], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 66], 01:02:04.636 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 112], 01:02:04.636 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 169], 01:02:04.636 | 99.00th=[ 178], 99.50th=[ 192], 99.90th=[ 236], 99.95th=[ 243], 01:02:04.636 | 99.99th=[ 251] 01:02:04.636 bw ( KiB/s): min=96256, max=245760, per=9.00%, avg=161908.65, stdev=63983.61, samples=20 01:02:04.636 iops : min= 376, max= 960, avg=632.45, stdev=249.94, samples=20 01:02:04.636 lat (msec) : 4=0.14%, 10=0.30%, 20=0.94%, 50=2.11%, 100=56.21% 01:02:04.636 lat (msec) : 250=40.30% 01:02:04.636 cpu : usr=1.46%, sys=2.31%, ctx=8098, majf=0, minf=1 01:02:04.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:02:04.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.636 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.636 issued rwts: total=0,6387,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.636 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.636 job8: (groupid=0, jobs=1): err= 0: pid=102948: Mon Jul 22 10:57:11 2024 01:02:04.636 write: IOPS=1031, BW=258MiB/s (270MB/s)(2609MiB/10118msec); 0 zone resets 01:02:04.636 slat (usec): min=20, max=32742, avg=929.20, stdev=2345.46 01:02:04.636 clat (msec): min=2, max=257, avg=61.09, stdev=49.13 01:02:04.636 lat (msec): min=2, max=257, avg=62.02, stdev=49.85 01:02:04.636 clat percentiles (msec): 01:02:04.636 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 01:02:04.636 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 36], 01:02:04.636 | 70.00th=[ 62], 80.00th=[ 67], 90.00th=[ 159], 95.00th=[ 176], 01:02:04.636 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 241], 99.95th=[ 249], 01:02:04.636 | 99.99th=[ 257] 01:02:04.636 bw ( KiB/s): min=86016, max=487936, per=14.75%, avg=265408.05, stdev=168609.87, samples=20 01:02:04.636 iops : min= 336, max= 1906, avg=1036.60, stdev=658.70, samples=20 01:02:04.636 lat (msec) : 4=0.12%, 10=0.46%, 20=1.17%, 50=63.17%, 100=17.55% 01:02:04.636 lat (msec) : 250=17.50%, 500=0.03% 01:02:04.636 cpu : usr=2.02%, sys=2.99%, ctx=13169, majf=0, minf=1 01:02:04.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 01:02:04.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.636 issued rwts: total=0,10436,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.636 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.636 job9: (groupid=0, jobs=1): err= 0: pid=102949: Mon Jul 22 10:57:11 2024 01:02:04.636 write: IOPS=439, BW=110MiB/s (115MB/s)(1113MiB/10118msec); 0 zone resets 01:02:04.636 slat (usec): min=22, max=51121, avg=2186.18, stdev=4059.08 01:02:04.636 clat (msec): min=5, max=266, avg=143.23, stdev=27.70 01:02:04.636 lat (msec): min=5, max=266, avg=145.42, stdev=27.83 01:02:04.636 clat percentiles (msec): 01:02:04.636 | 1.00th=[ 51], 5.00th=[ 101], 10.00th=[ 121], 20.00th=[ 126], 01:02:04.636 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 140], 60.00th=[ 150], 01:02:04.636 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 180], 95.00th=[ 188], 01:02:04.636 | 99.00th=[ 199], 99.50th=[ 220], 99.90th=[ 259], 99.95th=[ 259], 01:02:04.636 | 99.99th=[ 268] 01:02:04.636 bw ( KiB/s): min=86016, max=146725, per=6.24%, avg=112350.20, stdev=16207.37, samples=20 01:02:04.636 iops : min= 336, max= 573, avg=438.80, stdev=63.25, samples=20 01:02:04.636 lat (msec) : 10=0.13%, 20=0.38%, 50=0.47%, 100=3.53%, 250=95.37% 01:02:04.636 lat (msec) : 500=0.11% 01:02:04.636 cpu : usr=1.07%, sys=1.62%, ctx=5321, majf=0, minf=1 01:02:04.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 01:02:04.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.636 issued rwts: total=0,4451,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.636 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.636 job10: (groupid=0, jobs=1): err= 0: pid=102950: Mon Jul 22 10:57:11 2024 01:02:04.636 write: IOPS=912, BW=228MiB/s (239MB/s)(2299MiB/10078msec); 0 zone resets 01:02:04.636 slat (usec): min=16, max=32238, avg=1070.05, stdev=2001.28 01:02:04.636 clat (msec): min=12, max=207, avg=69.06, stdev=25.65 
01:02:04.636 lat (msec): min=12, max=207, avg=70.13, stdev=26.00 01:02:04.636 clat percentiles (msec): 01:02:04.636 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 59], 01:02:04.636 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 68], 01:02:04.636 | 70.00th=[ 69], 80.00th=[ 91], 90.00th=[ 94], 95.00th=[ 100], 01:02:04.636 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 205], 99.95th=[ 207], 01:02:04.636 | 99.99th=[ 209] 01:02:04.636 bw ( KiB/s): min=120320, max=435712, per=12.99%, avg=233676.55, stdev=80356.81, samples=20 01:02:04.636 iops : min= 470, max= 1702, avg=912.70, stdev=313.91, samples=20 01:02:04.636 lat (msec) : 20=0.17%, 50=16.84%, 100=78.16%, 250=4.83% 01:02:04.636 cpu : usr=2.00%, sys=3.07%, ctx=11915, majf=0, minf=1 01:02:04.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 01:02:04.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:04.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:02:04.636 issued rwts: total=0,9195,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:04.636 latency : target=0, window=0, percentile=100.00%, depth=64 01:02:04.636 01:02:04.636 Run status group 0 (all jobs): 01:02:04.636 WRITE: bw=1757MiB/s (1843MB/s), 110MiB/s-258MiB/s (115MB/s-270MB/s), io=17.4GiB (18.7GB), run=10077-10127msec 01:02:04.636 01:02:04.636 Disk stats (read/write): 01:02:04.636 nvme0n1: ios=50/9382, merge=0/0, ticks=27/1212607, in_queue=1212634, util=97.92% 01:02:04.636 nvme10n1: ios=49/10420, merge=0/0, ticks=28/1218198, in_queue=1218226, util=97.95% 01:02:04.636 nvme1n1: ios=49/14702, merge=0/0, ticks=39/1218340, in_queue=1218379, util=98.09% 01:02:04.636 nvme2n1: ios=40/11882, merge=0/0, ticks=33/1216730, in_queue=1216763, util=98.20% 01:02:04.636 nvme3n1: ios=31/9773, merge=0/0, ticks=27/1215945, in_queue=1215972, util=98.26% 01:02:04.636 nvme4n1: ios=0/12245, merge=0/0, ticks=0/1217785, in_queue=1217785, util=98.31% 01:02:04.636 nvme5n1: ios=0/12145, merge=0/0, ticks=0/1216443, in_queue=1216443, util=98.27% 01:02:04.636 nvme6n1: ios=0/12647, merge=0/0, ticks=0/1214418, in_queue=1214418, util=98.38% 01:02:04.636 nvme7n1: ios=0/20760, merge=0/0, ticks=0/1212109, in_queue=1212109, util=98.76% 01:02:04.636 nvme8n1: ios=0/8783, merge=0/0, ticks=0/1213375, in_queue=1213375, util=98.77% 01:02:04.636 nvme9n1: ios=0/18250, merge=0/0, ticks=0/1215643, in_queue=1215643, util=98.68% 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:02:04.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK1 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 01:02:04.636 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 01:02:04.636 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.636 10:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 01:02:04.636 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.636 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 01:02:04.636 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 01:02:04.637 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 01:02:04.637 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 01:02:04.637 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 
controller(s) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 01:02:04.637 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 01:02:04.637 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 01:02:04.637 
10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 01:02:04.637 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 01:02:04.637 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.896 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 01:02:04.896 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 01:02:04.897 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 
01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 01:02:04.897 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 01:02:04.897 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:02:04.897 rmmod nvme_tcp 01:02:04.897 rmmod nvme_fabrics 01:02:04.897 rmmod nvme_keyring 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 102237 ']' 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 102237 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 102237 ']' 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 102237 01:02:05.156 10:57:12 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102237 01:02:05.156 killing process with pid 102237 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102237' 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 102237 01:02:05.156 10:57:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 102237 01:02:05.413 10:57:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:02:05.413 10:57:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:02:05.413 10:57:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:02:05.413 10:57:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:02:05.413 10:57:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 01:02:05.413 10:57:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:05.413 10:57:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:05.413 10:57:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:05.413 10:57:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:02:05.671 01:02:05.671 real 0m50.075s 01:02:05.671 user 2m52.874s 01:02:05.671 sys 0m25.789s 01:02:05.671 ************************************ 01:02:05.671 END TEST nvmf_multiconnection 01:02:05.671 ************************************ 01:02:05.671 10:57:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:05.671 10:57:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:02:05.671 10:57:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:02:05.671 10:57:13 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 01:02:05.671 10:57:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:02:05.671 10:57:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:05.671 10:57:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:02:05.671 ************************************ 01:02:05.671 START TEST nvmf_initiator_timeout 01:02:05.671 ************************************ 01:02:05.671 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 01:02:05.671 * Looking for test storage... 
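Just above, before the nvmf_initiator_timeout test begins, each subsystem is torn down with the mirror-image pattern (multiconnection.sh lines 37-40) and the target is stopped by nvmftestfini. A condensed bash sketch of that teardown, assuming rpc_cmd is the test suite's RPC helper and that $nvmfpid holds the target pid (102237 in this run); the disconnect-wait interval is an assumption:

  for i in $(seq 1 "$NVMF_SUBSYS"); do
      # Drop the kernel initiator's connection to cnode$i
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # waitforserial_disconnect SPDK$i: wait until no block device reports that serial
      tries=0
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
          (( tries++ >= 15 )) && break
          sleep 1
      done
      # Remove the subsystem from the SPDK target via its RPC interface
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done
  # nvmftestfini: unload the initiator modules and stop the target process
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"   # the trace then waits for pid 102237 to exit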
01:02:05.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:05.671 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:05.671 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 01:02:05.671 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:05.671 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:05.671 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:05.671 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:05.671 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:05.671 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:05.671 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 01:02:05.672 10:57:13 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:05.672 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:05.930 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:02:05.930 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:02:05.930 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:02:05.930 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:02:05.930 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:02:05.930 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 01:02:05.930 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:02:05.931 Cannot find device "nvmf_tgt_br" 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:02:05.931 Cannot find device "nvmf_tgt_br2" 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:02:05.931 Cannot find device "nvmf_tgt_br" 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:02:05.931 Cannot find device "nvmf_tgt_br2" 01:02:05.931 10:57:13 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:05.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:05.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:02:05.931 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:02:06.189 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:02:06.189 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:02:06.189 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:02:06.189 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:02:06.189 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
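
The nvmf_veth_init sequence traced above builds a small virtual topology: one initiator-side veth pair (nvmf_init_if/nvmf_init_br), two target-side pairs whose nvmf_tgt_if* ends are moved into the nvmf_tgt_ns_spdk namespace, and a bridge nvmf_br that joins the *_br peers. A condensed sketch of the same setup, with the commands taken directly from the trace (the "true" fallbacks and cleanup of a previous run are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, first port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, second port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link set nvmf_init_if up;  ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

The iptables ACCEPT rules for port 4420 and the ping checks that verify the topology follow next in the trace.
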
01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:02:06.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:02:06.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 01:02:06.190 01:02:06.190 --- 10.0.0.2 ping statistics --- 01:02:06.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:06.190 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:02:06.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:06.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 01:02:06.190 01:02:06.190 --- 10.0.0.3 ping statistics --- 01:02:06.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:06.190 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:06.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:06.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 01:02:06.190 01:02:06.190 --- 10.0.0.1 ping statistics --- 01:02:06.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:06.190 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:02:06.190 10:57:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=103310 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 103310 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 103310 ']' 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:06.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:06.190 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:02:06.190 [2024-07-22 10:57:14.059172] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:02:06.190 [2024-07-22 10:57:14.059237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:06.448 [2024-07-22 10:57:14.178340] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:02:06.448 [2024-07-22 10:57:14.203483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:06.448 [2024-07-22 10:57:14.247458] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:06.448 [2024-07-22 10:57:14.247509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:06.448 [2024-07-22 10:57:14.247519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:06.448 [2024-07-22 10:57:14.247527] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:06.448 [2024-07-22 10:57:14.247534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
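
nvmfappstart, as traced here, boils down to launching nvmf_tgt inside the target namespace and then polling its RPC socket until it answers. A rough sketch of that pattern, assuming the paths shown above; the retry count and the use of rpc_get_methods are simplifications of what waitforlisten actually does:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the app answers on /var/tmp/spdk.sock (simplified waitforlisten)
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" || exit 1   # give up if the target process died
        sleep 0.1
    done
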
01:02:06.448 [2024-07-22 10:57:14.248022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:02:06.448 [2024-07-22 10:57:14.248331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:06.448 [2024-07-22 10:57:14.248333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:02:06.448 [2024-07-22 10:57:14.248106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:02:07.019 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:07.019 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 01:02:07.019 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:02:07.019 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:07.019 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:07.019 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:07.019 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 01:02:07.282 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:02:07.282 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:07.282 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:07.282 Malloc0 01:02:07.282 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:07.282 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 01:02:07.282 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:07.282 10:57:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:07.282 Delay0 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:07.282 [2024-07-22 10:57:15.008008] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:07.282 [2024-07-22 10:57:15.036134] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:02:07.282 10:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 01:02:09.816 10:57:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:02:09.816 10:57:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:02:09.816 10:57:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:02:09.816 10:57:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:02:09.816 10:57:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:02:09.816 10:57:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 01:02:09.816 10:57:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=103400 01:02:09.816 10:57:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 01:02:09.816 10:57:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 01:02:09.816 [global] 01:02:09.816 thread=1 01:02:09.816 invalidate=1 01:02:09.816 rw=write 01:02:09.816 time_based=1 01:02:09.816 runtime=60 01:02:09.816 ioengine=libaio 01:02:09.816 direct=1 01:02:09.816 bs=4096 01:02:09.816 iodepth=1 01:02:09.816 norandommap=0 01:02:09.816 numjobs=1 01:02:09.816 01:02:09.816 verify_dump=1 01:02:09.816 verify_backlog=512 01:02:09.816 verify_state_save=0 01:02:09.816 do_verify=1 01:02:09.816 verify=crc32c-intel 01:02:09.816 [job0] 01:02:09.816 filename=/dev/nvme0n1 01:02:09.816 Could not set queue depth (nvme0n1) 01:02:09.816 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:09.816 fio-3.35 01:02:09.816 Starting 1 thread 
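
The parameter dump that fio prints above corresponds to a job file along these lines, reconstructed from the trace; fio-wrapper generates it on the fly, so the file name used here is illustrative:

    cat > /tmp/initiator_timeout.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=60
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF

    fio /tmp/initiator_timeout.fio
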
01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:12.350 true 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:12.350 true 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:12.350 true 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:12.350 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:12.608 true 01:02:12.608 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:12.608 10:57:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 01:02:15.892 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 01:02:15.892 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:15.892 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:15.892 true 01:02:15.892 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:15.893 true 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:15.893 true 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd 
bdev_delay_update_latency Delay0 p99_write 30 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:02:15.893 true 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 01:02:15.893 10:57:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 103400 01:03:12.112 01:03:12.112 job0: (groupid=0, jobs=1): err= 0: pid=103421: Mon Jul 22 10:58:17 2024 01:03:12.112 read: IOPS=1006, BW=4028KiB/s (4124kB/s)(236MiB/60000msec) 01:03:12.112 slat (usec): min=7, max=12369, avg=10.95, stdev=59.32 01:03:12.112 clat (usec): min=83, max=40530k, avg=836.46, stdev=164892.53 01:03:12.112 lat (usec): min=135, max=40530k, avg=847.40, stdev=164892.54 01:03:12.112 clat percentiles (usec): 01:03:12.112 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 01:03:12.112 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 169], 01:03:12.112 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 01:03:12.112 | 99.00th=[ 265], 99.50th=[ 334], 99.90th=[ 404], 99.95th=[ 441], 01:03:12.112 | 99.99th=[ 717] 01:03:12.112 write: IOPS=1008, BW=4036KiB/s (4133kB/s)(236MiB/60000msec); 0 zone resets 01:03:12.112 slat (usec): min=12, max=849, avg=15.50, stdev= 5.29 01:03:12.112 clat (usec): min=2, max=1350, avg=127.98, stdev=20.56 01:03:12.112 lat (usec): min=109, max=1364, avg=143.48, stdev=22.03 01:03:12.112 clat percentiles (usec): 01:03:12.112 | 1.00th=[ 104], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 114], 01:03:12.112 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 126], 60.00th=[ 131], 01:03:12.112 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 01:03:12.112 | 99.00th=[ 174], 99.50th=[ 217], 99.90th=[ 297], 99.95th=[ 338], 01:03:12.112 | 99.99th=[ 619] 01:03:12.112 bw ( KiB/s): min= 4096, max=15912, per=100.00%, avg=12182.36, stdev=2023.48, samples=39 01:03:12.112 iops : min= 1024, max= 3978, avg=3045.59, stdev=505.87, samples=39 01:03:12.112 lat (usec) : 4=0.01%, 20=0.01%, 100=0.03%, 250=99.22%, 500=0.72% 01:03:12.112 lat (usec) : 750=0.02%, 1000=0.01% 01:03:12.112 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 01:03:12.112 cpu : usr=0.42%, sys=1.90%, ctx=120988, majf=0, minf=2 01:03:12.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:03:12.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:12.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:12.112 issued rwts: total=60416,60537,0,0 short=0,0,0,0 dropped=0,0,0,0 01:03:12.112 latency : target=0, window=0, percentile=100.00%, depth=1 01:03:12.112 01:03:12.112 Run status group 0 (all jobs): 01:03:12.112 READ: bw=4028KiB/s (4124kB/s), 4028KiB/s-4028KiB/s (4124kB/s-4124kB/s), io=236MiB (247MB), run=60000-60000msec 01:03:12.112 WRITE: bw=4036KiB/s (4133kB/s), 4036KiB/s-4036KiB/s (4133kB/s-4133kB/s), io=236MiB (248MB), run=60000-60000msec 01:03:12.112 01:03:12.112 Disk stats (read/write): 01:03:12.112 nvme0n1: ios=60262/60416, merge=0/0, ticks=10259/8154, in_queue=18413, util=99.78% 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:03:12.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:12.112 10:58:17 
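
Stripped of the xtrace noise, the check that just completed works like this: while fio writes through the Delay0 bdev, the test raises the injected latencies to roughly 31 seconds (past the Linux initiator's default 30-second I/O timeout), lets a few I/Os hit that delay, drops the latencies back to 30 microseconds, and then requires the fio verify job to exit 0. That is also why the clat max in the fio summary above is about 40 million microseconds. A sketch of the sequence using the same RPCs and the values as traced (note the trace sets p99_write an order of magnitude higher than the others):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # push latencies past the initiator I/O timeout while fio is running
    $rpc bdev_delay_update_latency Delay0 avg_read  31000000
    $rpc bdev_delay_update_latency Delay0 avg_write 31000000
    $rpc bdev_delay_update_latency Delay0 p99_read  31000000
    $rpc bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3

    # restore near-zero latencies so the job can complete
    $rpc bdev_delay_update_latency Delay0 avg_read  30
    $rpc bdev_delay_update_latency Delay0 avg_write 30
    $rpc bdev_delay_update_latency Delay0 p99_read  30
    $rpc bdev_delay_update_latency Delay0 p99_write 30

    wait "$fio_pid"   # fio must still exit 0 for the test to pass
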
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 01:03:12.112 nvmf hotplug test: fio successful as expected 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:03:12.112 rmmod nvme_tcp 01:03:12.112 rmmod nvme_fabrics 01:03:12.112 rmmod nvme_keyring 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 01:03:12.112 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 103310 ']' 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 103310 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 103310 ']' 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 103310 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:12.113 10:58:17 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 103310 01:03:12.113 killing process with pid 103310 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 103310' 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 103310 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 103310 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:03:12.113 10:58:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:12.113 10:58:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:03:12.113 01:03:12.113 real 1m4.608s 01:03:12.113 user 4m4.571s 01:03:12.113 sys 0m10.535s 01:03:12.113 10:58:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:12.113 ************************************ 01:03:12.113 END TEST nvmf_initiator_timeout 01:03:12.113 ************************************ 01:03:12.113 10:58:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:03:12.113 10:58:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:03:12.113 10:58:18 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 01:03:12.113 10:58:18 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 01:03:12.113 10:58:18 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:03:12.113 10:58:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:12.113 10:58:18 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 01:03:12.113 10:58:18 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:03:12.113 10:58:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:12.113 10:58:18 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 01:03:12.113 10:58:18 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 01:03:12.113 10:58:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:03:12.113 10:58:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:12.113 10:58:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:12.113 ************************************ 01:03:12.113 START TEST nvmf_multicontroller 01:03:12.113 ************************************ 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh 
--transport=tcp 01:03:12.113 * Looking for test storage... 01:03:12.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 01:03:12.113 10:58:18 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:12.113 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:03:12.114 Cannot find device "nvmf_tgt_br" 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:03:12.114 Cannot find device "nvmf_tgt_br2" 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 
-- # ip link set nvmf_tgt_br down 01:03:12.114 Cannot find device "nvmf_tgt_br" 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:03:12.114 Cannot find device "nvmf_tgt_br2" 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:12.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:12.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 
-- # ip link set nvmf_init_br master nvmf_br 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:03:12.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:12.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 01:03:12.114 01:03:12.114 --- 10.0.0.2 ping statistics --- 01:03:12.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:12.114 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:03:12.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:12.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 01:03:12.114 01:03:12.114 --- 10.0.0.3 ping statistics --- 01:03:12.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:12.114 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:12.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:12.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 01:03:12.114 01:03:12.114 --- 10.0.0.1 ping statistics --- 01:03:12.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:12.114 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=104261 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- 
# waitforlisten 104261 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 104261 ']' 01:03:12.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:12.114 10:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.114 [2024-07-22 10:58:18.874296] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:03:12.114 [2024-07-22 10:58:18.874378] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:12.114 [2024-07-22 10:58:18.993770] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:03:12.114 [2024-07-22 10:58:19.018665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:03:12.114 [2024-07-22 10:58:19.067299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:12.114 [2024-07-22 10:58:19.067563] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:12.114 [2024-07-22 10:58:19.067711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:12.114 [2024-07-22 10:58:19.067757] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:12.114 [2024-07-22 10:58:19.067782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
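The bring-up traced above reduces to two steps: launch nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with the requested core mask, then poll until its JSON-RPC socket answers, which is all waitforlisten is doing before any rpc_cmd call is allowed. A minimal standalone sketch of the same step using SPDK's stock rpc.py instead of the framework helpers (paths match this run; the 60-attempt budget is an assumption, not the framework's actual retry count, and waitforlisten additionally checks that the pid is still alive):

    # Start the NVMe-oF target on cores 1-3 (-m 0xE) inside the test namespace.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll the default RPC socket until the app responds, as waitforlisten does.
    for _ in $(seq 1 60); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 \
            rpc_get_methods >/dev/null 2>&1 && break
        sleep 1
    done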
01:03:12.114 [2024-07-22 10:58:19.067972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:03:12.114 [2024-07-22 10:58:19.068419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:12.114 [2024-07-22 10:58:19.068420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.114 [2024-07-22 10:58:19.812264] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.114 Malloc0 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.114 [2024-07-22 10:58:19.880803] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:03:12.114 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.115 
10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.115 [2024-07-22 10:58:19.892722] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.115 Malloc1 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=104313 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@47 -- # waitforlisten 104313 /var/tmp/bdevperf.sock 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 104313 ']' 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:03:12.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:12.115 10:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:13.051 10:58:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:13.051 10:58:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 01:03:13.051 10:58:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 01:03:13.051 10:58:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:13.051 10:58:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:13.310 NVMe0n1 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:13.310 1 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:13.310 2024/07/22 10:58:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:03:13.310 request: 01:03:13.310 { 01:03:13.310 "method": "bdev_nvme_attach_controller", 01:03:13.310 "params": { 01:03:13.310 "name": "NVMe0", 01:03:13.310 "trtype": "tcp", 01:03:13.310 "traddr": "10.0.0.2", 01:03:13.310 "adrfam": "ipv4", 01:03:13.310 "trsvcid": "4420", 01:03:13.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:13.310 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 01:03:13.310 "hostaddr": "10.0.0.2", 01:03:13.310 "hostsvcid": "60000", 01:03:13.310 "prchk_reftag": false, 01:03:13.310 "prchk_guard": false, 01:03:13.310 "hdgst": false, 01:03:13.310 "ddgst": false 01:03:13.310 } 01:03:13.310 } 01:03:13.310 Got JSON-RPC error response 01:03:13.310 GoRPCClient: error on JSON-RPC call 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 01:03:13.310 2024/07/22 10:58:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:03:13.310 request: 01:03:13.310 { 01:03:13.310 "method": "bdev_nvme_attach_controller", 01:03:13.310 "params": { 01:03:13.310 "name": "NVMe0", 01:03:13.310 "trtype": "tcp", 01:03:13.310 "traddr": "10.0.0.2", 01:03:13.310 "adrfam": "ipv4", 01:03:13.310 "trsvcid": "4420", 01:03:13.310 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:03:13.310 "hostaddr": "10.0.0.2", 01:03:13.310 "hostsvcid": "60000", 01:03:13.310 "prchk_reftag": false, 01:03:13.310 "prchk_guard": false, 01:03:13.310 "hdgst": false, 01:03:13.310 "ddgst": false 01:03:13.310 } 01:03:13.310 } 01:03:13.310 Got JSON-RPC error response 01:03:13.310 GoRPCClient: error on JSON-RPC call 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:03:13.310 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:13.311 2024/07/22 10:58:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], 
err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 01:03:13.311 request: 01:03:13.311 { 01:03:13.311 "method": "bdev_nvme_attach_controller", 01:03:13.311 "params": { 01:03:13.311 "name": "NVMe0", 01:03:13.311 "trtype": "tcp", 01:03:13.311 "traddr": "10.0.0.2", 01:03:13.311 "adrfam": "ipv4", 01:03:13.311 "trsvcid": "4420", 01:03:13.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:13.311 "hostaddr": "10.0.0.2", 01:03:13.311 "hostsvcid": "60000", 01:03:13.311 "prchk_reftag": false, 01:03:13.311 "prchk_guard": false, 01:03:13.311 "hdgst": false, 01:03:13.311 "ddgst": false, 01:03:13.311 "multipath": "disable" 01:03:13.311 } 01:03:13.311 } 01:03:13.311 Got JSON-RPC error response 01:03:13.311 GoRPCClient: error on JSON-RPC call 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:13.311 2024/07/22 10:58:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:03:13.311 request: 01:03:13.311 { 01:03:13.311 "method": "bdev_nvme_attach_controller", 01:03:13.311 "params": { 01:03:13.311 "name": "NVMe0", 01:03:13.311 
"trtype": "tcp", 01:03:13.311 "traddr": "10.0.0.2", 01:03:13.311 "adrfam": "ipv4", 01:03:13.311 "trsvcid": "4420", 01:03:13.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:13.311 "hostaddr": "10.0.0.2", 01:03:13.311 "hostsvcid": "60000", 01:03:13.311 "prchk_reftag": false, 01:03:13.311 "prchk_guard": false, 01:03:13.311 "hdgst": false, 01:03:13.311 "ddgst": false, 01:03:13.311 "multipath": "failover" 01:03:13.311 } 01:03:13.311 } 01:03:13.311 Got JSON-RPC error response 01:03:13.311 GoRPCClient: error on JSON-RPC call 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:13.311 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:13.311 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:13.570 01:03:13.570 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:13.570 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:03:13.570 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:13.570 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 01:03:13.570 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:13.570 10:58:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:13.570 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 01:03:13.570 10:58:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:03:14.505 0 01:03:14.764 10:58:22 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 104313 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 104313 ']' 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 104313 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104313 01:03:14.764 killing process with pid 104313 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104313' 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 104313 01:03:14.764 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 104313 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 01:03:15.024 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 01:03:15.024 [2024-07-22 10:58:20.018905] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
01:03:15.024 [2024-07-22 10:58:20.019000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104313 ] 01:03:15.024 [2024-07-22 10:58:20.137764] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:03:15.024 [2024-07-22 10:58:20.164073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:15.024 [2024-07-22 10:58:20.212112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:15.024 [2024-07-22 10:58:21.290816] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name c3711251-7a9b-42ea-b57d-a27d66795cc0 already exists 01:03:15.024 [2024-07-22 10:58:21.290882] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:c3711251-7a9b-42ea-b57d-a27d66795cc0 alias for bdev NVMe1n1 01:03:15.024 [2024-07-22 10:58:21.290897] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 01:03:15.024 Running I/O for 1 seconds... 01:03:15.024 01:03:15.024 Latency(us) 01:03:15.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:15.024 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 01:03:15.024 NVMe0n1 : 1.00 20907.56 81.67 0.00 0.00 6108.40 2960.96 15475.97 01:03:15.024 =================================================================================================================== 01:03:15.024 Total : 20907.56 81.67 0.00 0.00 6108.40 2960.96 15475.97 01:03:15.024 Received shutdown signal, test time was about 1.000000 seconds 01:03:15.024 01:03:15.024 Latency(us) 01:03:15.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:15.024 =================================================================================================================== 01:03:15.024 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:15.024 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 01:03:15.024 10:58:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:03:15.024 rmmod nvme_tcp 01:03:15.024 rmmod nvme_fabrics 01:03:15.024 rmmod nvme_keyring 01:03:15.282 10:58:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:03:15.282 10:58:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 01:03:15.282 10:58:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 01:03:15.282 10:58:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 104261 ']' 01:03:15.282 10:58:22 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@490 -- # killprocess 104261 01:03:15.282 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 104261 ']' 01:03:15.282 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 104261 01:03:15.282 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 01:03:15.282 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:15.282 10:58:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104261 01:03:15.282 killing process with pid 104261 01:03:15.282 10:58:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:03:15.282 10:58:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:03:15.282 10:58:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104261' 01:03:15.282 10:58:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 104261 01:03:15.282 10:58:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 104261 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:03:15.539 01:03:15.539 real 0m5.106s 01:03:15.539 user 0m15.569s 01:03:15.539 sys 0m1.293s 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:15.539 10:58:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:15.539 ************************************ 01:03:15.539 END TEST nvmf_multicontroller 01:03:15.539 ************************************ 01:03:15.539 10:58:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:03:15.539 10:58:23 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 01:03:15.539 10:58:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:03:15.539 10:58:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:15.539 10:58:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:15.539 ************************************ 01:03:15.539 START TEST nvmf_aer 01:03:15.539 ************************************ 01:03:15.539 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 01:03:15.797 * Looking for test storage... 
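The multicontroller run just above finishes with nvmftestfini: the kernel nvme-tcp and nvme-fabrics modules are unloaded, the nvmf_tgt process (pid 104261) is killed and waited on, the target namespace is removed, and the initiator address is flushed, after which run_test launches nvmf_aer. A rough standalone equivalent of that teardown, assuming the names used in this run (the trace only shows the _remove_spdk_ns wrapper, so the explicit ip netns delete below is an assumption about what it does):

    # Unload the kernel NVMe/TCP host modules pulled in for the test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the target and wait for it to exit (killprocess in the framework).
    kill "$nvmfpid" && wait "$nvmfpid"

    # Drop the target namespace and flush the host-side initiator address.
    ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if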
01:03:15.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:03:15.797 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:03:15.798 
10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:03:15.798 Cannot find device "nvmf_tgt_br" 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:03:15.798 Cannot find device "nvmf_tgt_br2" 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:03:15.798 Cannot find device "nvmf_tgt_br" 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:03:15.798 Cannot find device "nvmf_tgt_br2" 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:15.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 01:03:15.798 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:16.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
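At this point nvmf_veth_init has created the nvmf_tgt_ns_spdk namespace, three veth pairs (nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2), and moved the two target-side interfaces into the namespace; the addressing (10.0.0.1 on the host, 10.0.0.2 and 10.0.0.3 in the namespace), the nvmf_br bridge, the iptables rules and the sanity pings follow in the commands below. Once that completes, the resulting topology can be inspected with standard iproute2 commands like these (illustrative checks only, not part of the test):

    ip -br addr show dev nvmf_init_if           # host-side initiator end, 10.0.0.1/24
    bridge link show                            # the three *_br ends enslaved to nvmf_br
    ip netns exec nvmf_tgt_ns_spdk ip -br addr  # target ends, 10.0.0.2/24 and 10.0.0.3/24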
01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:03:16.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:16.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 01:03:16.057 01:03:16.057 --- 10.0.0.2 ping statistics --- 01:03:16.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:16.057 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:03:16.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:16.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 01:03:16.057 01:03:16.057 --- 10.0.0.3 ping statistics --- 01:03:16.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:16.057 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:16.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:03:16.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:03:16.057 01:03:16.057 --- 10.0.0.1 ping statistics --- 01:03:16.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:16.057 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 01:03:16.057 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:16.314 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=104568 01:03:16.314 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:03:16.314 10:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 104568 01:03:16.314 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 104568 ']' 01:03:16.314 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:16.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:16.314 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:16.314 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:16.314 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:16.314 10:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:16.314 [2024-07-22 10:58:24.040839] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:03:16.314 [2024-07-22 10:58:24.040920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:16.314 [2024-07-22 10:58:24.163238] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:03:16.314 [2024-07-22 10:58:24.173828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:16.314 [2024-07-22 10:58:24.224032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:16.314 [2024-07-22 10:58:24.224095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:03:16.314 [2024-07-22 10:58:24.224105] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:16.314 [2024-07-22 10:58:24.224113] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:16.314 [2024-07-22 10:58:24.224120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:16.314 [2024-07-22 10:58:24.224358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:16.315 [2024-07-22 10:58:24.224541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:03:16.315 [2024-07-22 10:58:24.224997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:16.315 [2024-07-22 10:58:24.224998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:03:17.247 10:58:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:17.247 10:58:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 01:03:17.247 10:58:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:03:17.247 10:58:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 01:03:17.247 10:58:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.247 10:58:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:17.247 10:58:24 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:03:17.247 10:58:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.247 10:58:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.247 [2024-07-22 10:58:24.952598] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:17.247 10:58:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.248 10:58:24 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 01:03:17.248 10:58:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.248 10:58:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.248 Malloc0 01:03:17.248 10:58:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.248 10:58:24 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.248 [2024-07-22 10:58:25.033848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.248 [ 01:03:17.248 { 01:03:17.248 "allow_any_host": true, 01:03:17.248 "hosts": [], 01:03:17.248 "listen_addresses": [], 01:03:17.248 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:03:17.248 "subtype": "Discovery" 01:03:17.248 }, 01:03:17.248 { 01:03:17.248 "allow_any_host": true, 01:03:17.248 "hosts": [], 01:03:17.248 "listen_addresses": [ 01:03:17.248 { 01:03:17.248 "adrfam": "IPv4", 01:03:17.248 "traddr": "10.0.0.2", 01:03:17.248 "trsvcid": "4420", 01:03:17.248 "trtype": "TCP" 01:03:17.248 } 01:03:17.248 ], 01:03:17.248 "max_cntlid": 65519, 01:03:17.248 "max_namespaces": 2, 01:03:17.248 "min_cntlid": 1, 01:03:17.248 "model_number": "SPDK bdev Controller", 01:03:17.248 "namespaces": [ 01:03:17.248 { 01:03:17.248 "bdev_name": "Malloc0", 01:03:17.248 "name": "Malloc0", 01:03:17.248 "nguid": "5AA12E45646D48EC977A4A0F4B69712B", 01:03:17.248 "nsid": 1, 01:03:17.248 "uuid": "5aa12e45-646d-48ec-977a-4a0f4b69712b" 01:03:17.248 } 01:03:17.248 ], 01:03:17.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:03:17.248 "serial_number": "SPDK00000000000001", 01:03:17.248 "subtype": "NVMe" 01:03:17.248 } 01:03:17.248 ] 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=104623 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 01:03:17.248 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.506 Malloc1 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.506 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.506 Asynchronous Event Request test 01:03:17.506 Attaching to 10.0.0.2 01:03:17.506 Attached to 10.0.0.2 01:03:17.506 Registering asynchronous event callbacks... 01:03:17.506 Starting namespace attribute notice tests for all controllers... 01:03:17.506 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 01:03:17.506 aer_cb - Changed Namespace 01:03:17.506 Cleaning up... 01:03:17.506 [ 01:03:17.506 { 01:03:17.506 "allow_any_host": true, 01:03:17.506 "hosts": [], 01:03:17.506 "listen_addresses": [], 01:03:17.506 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:03:17.506 "subtype": "Discovery" 01:03:17.506 }, 01:03:17.506 { 01:03:17.507 "allow_any_host": true, 01:03:17.507 "hosts": [], 01:03:17.507 "listen_addresses": [ 01:03:17.507 { 01:03:17.507 "adrfam": "IPv4", 01:03:17.507 "traddr": "10.0.0.2", 01:03:17.507 "trsvcid": "4420", 01:03:17.507 "trtype": "TCP" 01:03:17.507 } 01:03:17.507 ], 01:03:17.507 "max_cntlid": 65519, 01:03:17.507 "max_namespaces": 2, 01:03:17.507 "min_cntlid": 1, 01:03:17.507 "model_number": "SPDK bdev Controller", 01:03:17.507 "namespaces": [ 01:03:17.507 { 01:03:17.507 "bdev_name": "Malloc0", 01:03:17.507 "name": "Malloc0", 01:03:17.507 "nguid": "5AA12E45646D48EC977A4A0F4B69712B", 01:03:17.507 "nsid": 1, 01:03:17.507 "uuid": "5aa12e45-646d-48ec-977a-4a0f4b69712b" 01:03:17.507 }, 01:03:17.507 { 01:03:17.507 "bdev_name": "Malloc1", 01:03:17.507 "name": "Malloc1", 01:03:17.507 "nguid": "40FCDD543CDE4850A5765B80CA8A0B95", 01:03:17.507 "nsid": 2, 01:03:17.507 "uuid": "40fcdd54-3cde-4850-a576-5b80ca8a0b95" 01:03:17.507 } 01:03:17.507 ], 01:03:17.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:03:17.507 "serial_number": "SPDK00000000000001", 01:03:17.507 "subtype": "NVMe" 01:03:17.507 } 01:03:17.507 ] 01:03:17.507 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.507 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 104623 01:03:17.507 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 01:03:17.507 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.507 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.507 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.507 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 01:03:17.507 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.507 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:03:17.765 rmmod nvme_tcp 01:03:17.765 rmmod nvme_fabrics 01:03:17.765 rmmod nvme_keyring 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 104568 ']' 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 104568 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 104568 ']' 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 104568 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104568 01:03:17.765 killing process with pid 104568 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104568' 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 104568 01:03:17.765 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 104568 01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
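The AER portion above is a touch-file handshake: the aer test binary connects to cnode1, arms Asynchronous Event Requests, then touches /tmp/aer_touch_file; the script waits for that file, hot-adds Malloc1 as namespace 2, and the resulting Changed Namespace notice (log page 4, event type 0x02) is what the aer_cb lines report before cleanup. A condensed sketch of that sequence, taken from the commands in this log (rpc.py stands in for the rpc_cmd wrapper; retry counting and error handling omitted):

rm -f /tmp/aer_touch_file
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done    # aer is attached and AERs are armed
# adding a second namespace is what fires the Changed Namespace notice seen above
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait $aerpid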
01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:03:18.024 01:03:18.024 real 0m2.486s 01:03:18.024 user 0m6.415s 01:03:18.024 sys 0m0.797s 01:03:18.024 ************************************ 01:03:18.024 END TEST nvmf_aer 01:03:18.024 ************************************ 01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:18.024 10:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:18.024 10:58:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:03:18.024 10:58:25 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 01:03:18.024 10:58:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:03:18.024 10:58:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:18.024 10:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:18.024 ************************************ 01:03:18.024 START TEST nvmf_async_init 01:03:18.024 ************************************ 01:03:18.024 10:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 01:03:18.320 * Looking for test storage... 01:03:18.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:03:18.321 
10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3911cc20cd9240b99b6c6c91146e4266 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:03:18.321 Cannot find device "nvmf_tgt_br" 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:03:18.321 Cannot find device "nvmf_tgt_br2" 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:03:18.321 Cannot find device "nvmf_tgt_br" 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:03:18.321 Cannot find device "nvmf_tgt_br2" 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 01:03:18.321 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:18.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:18.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:03:18.586 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:03:18.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:18.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 01:03:18.844 01:03:18.844 --- 10.0.0.2 ping statistics --- 01:03:18.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:18.844 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:03:18.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:18.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 01:03:18.844 01:03:18.844 --- 10.0.0.3 ping statistics --- 01:03:18.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:18.844 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:18.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:03:18.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 01:03:18.844 01:03:18.844 --- 10.0.0.1 ping statistics --- 01:03:18.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:18.844 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=104791 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 104791 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 104791 ']' 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:18.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:18.844 10:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:18.844 [2024-07-22 10:58:26.674209] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:03:18.844 [2024-07-22 10:58:26.674340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:19.102 [2024-07-22 10:58:26.796957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:03:19.102 [2024-07-22 10:58:26.810178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:19.102 [2024-07-22 10:58:26.857865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
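The run of ip/iptables commands and pings above is nvmf_veth_init rebuilding the test topology after the previous teardown: one veth pair for the initiator side and one per target interface, the target ends moved into nvmf_tgt_ns_spdk, and the bridge-side ends enslaved to nvmf_br. Condensed here to the single-target-interface subset (the second interface, 10.0.0.3 on nvmf_tgt_if2, follows the same pattern and is omitted); every command below appears verbatim in the log:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator side into the target namespace, matching the ping output above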
01:03:19.102 [2024-07-22 10:58:26.857921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:19.102 [2024-07-22 10:58:26.857930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:19.102 [2024-07-22 10:58:26.857938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:19.102 [2024-07-22 10:58:26.857945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:19.102 [2024-07-22 10:58:26.857970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:19.668 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:19.669 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 01:03:19.669 10:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:03:19.669 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 01:03:19.669 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:19.669 10:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:19.669 10:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 01:03:19.669 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:19.669 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:19.669 [2024-07-22 10:58:27.599680] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:19.927 null0 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3911cc20cd9240b99b6c6c91146e4266 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:19.927 10:58:27 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:19.927 [2024-07-22 10:58:27.659735] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:19.927 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.194 nvme0n1 01:03:20.194 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.194 10:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:03:20.194 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:20.194 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.194 [ 01:03:20.194 { 01:03:20.194 "aliases": [ 01:03:20.194 "3911cc20-cd92-40b9-9b6c-6c91146e4266" 01:03:20.194 ], 01:03:20.194 "assigned_rate_limits": { 01:03:20.194 "r_mbytes_per_sec": 0, 01:03:20.194 "rw_ios_per_sec": 0, 01:03:20.194 "rw_mbytes_per_sec": 0, 01:03:20.194 "w_mbytes_per_sec": 0 01:03:20.194 }, 01:03:20.194 "block_size": 512, 01:03:20.194 "claimed": false, 01:03:20.194 "driver_specific": { 01:03:20.194 "mp_policy": "active_passive", 01:03:20.194 "nvme": [ 01:03:20.194 { 01:03:20.194 "ctrlr_data": { 01:03:20.194 "ana_reporting": false, 01:03:20.194 "cntlid": 1, 01:03:20.194 "firmware_revision": "24.09", 01:03:20.194 "model_number": "SPDK bdev Controller", 01:03:20.194 "multi_ctrlr": true, 01:03:20.194 "oacs": { 01:03:20.194 "firmware": 0, 01:03:20.194 "format": 0, 01:03:20.194 "ns_manage": 0, 01:03:20.194 "security": 0 01:03:20.194 }, 01:03:20.194 "serial_number": "00000000000000000000", 01:03:20.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:20.194 "vendor_id": "0x8086" 01:03:20.194 }, 01:03:20.194 "ns_data": { 01:03:20.194 "can_share": true, 01:03:20.194 "id": 1 01:03:20.194 }, 01:03:20.194 "trid": { 01:03:20.194 "adrfam": "IPv4", 01:03:20.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:20.194 "traddr": "10.0.0.2", 01:03:20.194 "trsvcid": "4420", 01:03:20.194 "trtype": "TCP" 01:03:20.194 }, 01:03:20.194 "vs": { 01:03:20.194 "nvme_version": "1.3" 01:03:20.194 } 01:03:20.194 } 01:03:20.194 ] 01:03:20.194 }, 01:03:20.194 "memory_domains": [ 01:03:20.194 { 01:03:20.194 "dma_device_id": "system", 01:03:20.194 "dma_device_type": 1 01:03:20.194 } 01:03:20.194 ], 01:03:20.194 "name": "nvme0n1", 01:03:20.194 "num_blocks": 2097152, 01:03:20.194 "product_name": "NVMe disk", 01:03:20.194 "supported_io_types": { 01:03:20.194 "abort": true, 01:03:20.194 "compare": true, 01:03:20.194 "compare_and_write": true, 01:03:20.194 "copy": true, 01:03:20.194 "flush": true, 01:03:20.194 "get_zone_info": false, 01:03:20.194 "nvme_admin": true, 01:03:20.194 "nvme_io": true, 01:03:20.194 "nvme_io_md": false, 01:03:20.194 "nvme_iov_md": false, 01:03:20.194 "read": true, 01:03:20.194 "reset": true, 01:03:20.194 "seek_data": false, 01:03:20.194 
"seek_hole": false, 01:03:20.194 "unmap": false, 01:03:20.194 "write": true, 01:03:20.194 "write_zeroes": true, 01:03:20.194 "zcopy": false, 01:03:20.194 "zone_append": false, 01:03:20.194 "zone_management": false 01:03:20.194 }, 01:03:20.194 "uuid": "3911cc20-cd92-40b9-9b6c-6c91146e4266", 01:03:20.194 "zoned": false 01:03:20.194 } 01:03:20.194 ] 01:03:20.194 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.194 10:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 01:03:20.194 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:20.194 10:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.194 [2024-07-22 10:58:27.947308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:03:20.194 [2024-07-22 10:58:27.947414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e64d0 (9): Bad file descriptor 01:03:20.194 [2024-07-22 10:58:28.079481] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 01:03:20.194 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.194 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:03:20.194 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:20.194 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.194 [ 01:03:20.194 { 01:03:20.194 "aliases": [ 01:03:20.194 "3911cc20-cd92-40b9-9b6c-6c91146e4266" 01:03:20.194 ], 01:03:20.194 "assigned_rate_limits": { 01:03:20.194 "r_mbytes_per_sec": 0, 01:03:20.194 "rw_ios_per_sec": 0, 01:03:20.194 "rw_mbytes_per_sec": 0, 01:03:20.194 "w_mbytes_per_sec": 0 01:03:20.194 }, 01:03:20.194 "block_size": 512, 01:03:20.194 "claimed": false, 01:03:20.194 "driver_specific": { 01:03:20.194 "mp_policy": "active_passive", 01:03:20.194 "nvme": [ 01:03:20.194 { 01:03:20.194 "ctrlr_data": { 01:03:20.194 "ana_reporting": false, 01:03:20.194 "cntlid": 2, 01:03:20.194 "firmware_revision": "24.09", 01:03:20.194 "model_number": "SPDK bdev Controller", 01:03:20.194 "multi_ctrlr": true, 01:03:20.194 "oacs": { 01:03:20.194 "firmware": 0, 01:03:20.194 "format": 0, 01:03:20.194 "ns_manage": 0, 01:03:20.194 "security": 0 01:03:20.194 }, 01:03:20.194 "serial_number": "00000000000000000000", 01:03:20.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:20.194 "vendor_id": "0x8086" 01:03:20.194 }, 01:03:20.194 "ns_data": { 01:03:20.194 "can_share": true, 01:03:20.194 "id": 1 01:03:20.194 }, 01:03:20.194 "trid": { 01:03:20.194 "adrfam": "IPv4", 01:03:20.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:20.194 "traddr": "10.0.0.2", 01:03:20.194 "trsvcid": "4420", 01:03:20.194 "trtype": "TCP" 01:03:20.194 }, 01:03:20.194 "vs": { 01:03:20.194 "nvme_version": "1.3" 01:03:20.194 } 01:03:20.194 } 01:03:20.194 ] 01:03:20.194 }, 01:03:20.194 "memory_domains": [ 01:03:20.194 { 01:03:20.194 "dma_device_id": "system", 01:03:20.194 "dma_device_type": 1 01:03:20.194 } 01:03:20.194 ], 01:03:20.194 "name": "nvme0n1", 01:03:20.194 "num_blocks": 2097152, 01:03:20.194 "product_name": "NVMe disk", 01:03:20.194 "supported_io_types": { 01:03:20.194 "abort": true, 01:03:20.194 "compare": true, 01:03:20.194 "compare_and_write": true, 01:03:20.194 "copy": true, 01:03:20.194 "flush": true, 01:03:20.194 "get_zone_info": false, 
01:03:20.194 "nvme_admin": true, 01:03:20.194 "nvme_io": true, 01:03:20.194 "nvme_io_md": false, 01:03:20.194 "nvme_iov_md": false, 01:03:20.194 "read": true, 01:03:20.194 "reset": true, 01:03:20.194 "seek_data": false, 01:03:20.194 "seek_hole": false, 01:03:20.194 "unmap": false, 01:03:20.194 "write": true, 01:03:20.194 "write_zeroes": true, 01:03:20.194 "zcopy": false, 01:03:20.194 "zone_append": false, 01:03:20.194 "zone_management": false 01:03:20.194 }, 01:03:20.194 "uuid": "3911cc20-cd92-40b9-9b6c-6c91146e4266", 01:03:20.194 "zoned": false 01:03:20.194 } 01:03:20.194 ] 01:03:20.194 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.194 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:03:20.194 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:20.194 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.452 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.452 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 01:03:20.452 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.lrjCbyQ8RN 01:03:20.452 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:03:20.452 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.lrjCbyQ8RN 01:03:20.452 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 01:03:20.452 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.453 [2024-07-22 10:58:28.171091] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:03:20.453 [2024-07-22 10:58:28.171295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lrjCbyQ8RN 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.453 [2024-07-22 10:58:28.183055] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lrjCbyQ8RN 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.453 [2024-07-22 10:58:28.195044] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:03:20.453 [2024-07-22 10:58:28.195123] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:03:20.453 nvme0n1 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.453 [ 01:03:20.453 { 01:03:20.453 "aliases": [ 01:03:20.453 "3911cc20-cd92-40b9-9b6c-6c91146e4266" 01:03:20.453 ], 01:03:20.453 "assigned_rate_limits": { 01:03:20.453 "r_mbytes_per_sec": 0, 01:03:20.453 "rw_ios_per_sec": 0, 01:03:20.453 "rw_mbytes_per_sec": 0, 01:03:20.453 "w_mbytes_per_sec": 0 01:03:20.453 }, 01:03:20.453 "block_size": 512, 01:03:20.453 "claimed": false, 01:03:20.453 "driver_specific": { 01:03:20.453 "mp_policy": "active_passive", 01:03:20.453 "nvme": [ 01:03:20.453 { 01:03:20.453 "ctrlr_data": { 01:03:20.453 "ana_reporting": false, 01:03:20.453 "cntlid": 3, 01:03:20.453 "firmware_revision": "24.09", 01:03:20.453 "model_number": "SPDK bdev Controller", 01:03:20.453 "multi_ctrlr": true, 01:03:20.453 "oacs": { 01:03:20.453 "firmware": 0, 01:03:20.453 "format": 0, 01:03:20.453 "ns_manage": 0, 01:03:20.453 "security": 0 01:03:20.453 }, 01:03:20.453 "serial_number": "00000000000000000000", 01:03:20.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:20.453 "vendor_id": "0x8086" 01:03:20.453 }, 01:03:20.453 "ns_data": { 01:03:20.453 "can_share": true, 01:03:20.453 "id": 1 01:03:20.453 }, 01:03:20.453 "trid": { 01:03:20.453 "adrfam": "IPv4", 01:03:20.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:20.453 "traddr": "10.0.0.2", 01:03:20.453 "trsvcid": "4421", 01:03:20.453 "trtype": "TCP" 01:03:20.453 }, 01:03:20.453 "vs": { 01:03:20.453 "nvme_version": "1.3" 01:03:20.453 } 01:03:20.453 } 01:03:20.453 ] 01:03:20.453 }, 01:03:20.453 "memory_domains": [ 01:03:20.453 { 01:03:20.453 "dma_device_id": "system", 01:03:20.453 "dma_device_type": 1 01:03:20.453 } 01:03:20.453 ], 01:03:20.453 "name": "nvme0n1", 01:03:20.453 "num_blocks": 2097152, 01:03:20.453 "product_name": "NVMe disk", 01:03:20.453 "supported_io_types": { 01:03:20.453 "abort": true, 01:03:20.453 "compare": true, 01:03:20.453 "compare_and_write": true, 01:03:20.453 "copy": true, 01:03:20.453 "flush": true, 01:03:20.453 "get_zone_info": false, 01:03:20.453 "nvme_admin": true, 01:03:20.453 "nvme_io": true, 01:03:20.453 "nvme_io_md": false, 01:03:20.453 "nvme_iov_md": false, 01:03:20.453 "read": true, 01:03:20.453 "reset": true, 01:03:20.453 "seek_data": false, 01:03:20.453 "seek_hole": false, 01:03:20.453 "unmap": false, 01:03:20.453 "write": true, 01:03:20.453 "write_zeroes": true, 01:03:20.453 "zcopy": false, 01:03:20.453 "zone_append": false, 01:03:20.453 "zone_management": false 01:03:20.453 }, 01:03:20.453 "uuid": "3911cc20-cd92-40b9-9b6c-6c91146e4266", 01:03:20.453 "zoned": false 01:03:20.453 } 01:03:20.453 ] 01:03:20.453 10:58:28 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.lrjCbyQ8RN 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 01:03:20.453 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:03:20.711 rmmod nvme_tcp 01:03:20.711 rmmod nvme_fabrics 01:03:20.711 rmmod nvme_keyring 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 104791 ']' 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 104791 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 104791 ']' 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 104791 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104791 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:20.711 killing process with pid 104791 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104791' 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 104791 01:03:20.711 [2024-07-22 10:58:28.493771] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:03:20.711 [2024-07-22 10:58:28.493816] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:03:20.711 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 104791 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:03:20.969 01:03:20.969 real 0m2.819s 01:03:20.969 user 0m2.396s 01:03:20.969 sys 0m0.774s 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:20.969 10:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:20.969 ************************************ 01:03:20.969 END TEST nvmf_async_init 01:03:20.969 ************************************ 01:03:20.969 10:58:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:03:20.969 10:58:28 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 01:03:20.969 10:58:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:03:20.969 10:58:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:20.969 10:58:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:20.969 ************************************ 01:03:20.969 START TEST dma 01:03:20.969 ************************************ 01:03:20.969 10:58:28 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 01:03:21.230 * Looking for test storage... 
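Before its teardown, the async_init run above also walked the experimental TLS path: any-host access is disabled, a --secure-channel listener is added on port 4421, a PSK is registered for host1, and bdev_nvme_attach_controller reconnects through that listener with the same key (which is what produces the deprecation warnings about the PSK path and spdk_nvme_ctrlr_opts.psk). The sequence, reassembled from the logged commands (the redirection of the key into the mktemp file is implied, since xtrace does not print redirections; rpc.py stands in for rpc_cmd):

key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
rm -f "$key_path"    # the script removes the temporary key file once the controller is attached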
01:03:21.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:21.230 10:58:28 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:21.230 10:58:28 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:21.230 10:58:28 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:21.230 10:58:28 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:21.230 10:58:28 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:21.230 10:58:28 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:21.230 10:58:28 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:21.230 10:58:28 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 01:03:21.230 10:58:28 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:03:21.230 10:58:28 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 01:03:21.230 10:58:28 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 01:03:21.230 10:58:28 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 01:03:21.230 01:03:21.230 real 0m0.164s 01:03:21.230 user 0m0.077s 01:03:21.230 sys 0m0.097s 01:03:21.230 10:58:28 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:21.230 10:58:28 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 01:03:21.230 ************************************ 01:03:21.230 END TEST dma 01:03:21.230 ************************************ 01:03:21.230 10:58:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:03:21.230 10:58:29 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:03:21.230 10:58:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:03:21.230 10:58:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:21.230 10:58:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:21.230 ************************************ 01:03:21.230 START TEST nvmf_identify 01:03:21.230 ************************************ 01:03:21.230 10:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:03:21.489 * Looking for test storage... 
01:03:21.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:21.489 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:03:21.490 Cannot find device "nvmf_tgt_br" 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:03:21.490 Cannot find device "nvmf_tgt_br2" 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:03:21.490 Cannot find device "nvmf_tgt_br" 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:03:21.490 Cannot find device "nvmf_tgt_br2" 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:21.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:21.490 10:58:29 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:21.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:21.490 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:03:21.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:03:21.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 01:03:21.748 01:03:21.748 --- 10.0.0.2 ping statistics --- 01:03:21.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:21.748 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:03:21.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:21.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 01:03:21.748 01:03:21.748 --- 10.0.0.3 ping statistics --- 01:03:21.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:21.748 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:21.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:21.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 01:03:21.748 01:03:21.748 --- 10.0.0.1 ping statistics --- 01:03:21.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:21.748 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:21.748 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=105066 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 105066 01:03:22.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 105066 ']' 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
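Note: before the target app is launched, nvmf_veth_init (traced above) builds a veth/namespace topology so the initiator on the host can reach an NVMe/TCP target running inside a network namespace. A condensed sketch of that plumbing, using the same interface names and addresses as the log; cleanup, error handling and the second target interface (nvmf_tgt_if2/nvmf_tgt_br2) are omitted for brevity:

# condensed sketch of the topology built in the trace above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                             # bridge ties both peer ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                          # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target namespace -> host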
01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:22.007 10:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:22.007 [2024-07-22 10:58:29.779795] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:03:22.007 [2024-07-22 10:58:29.779878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:22.007 [2024-07-22 10:58:29.900882] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:03:22.007 [2024-07-22 10:58:29.923930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:22.264 [2024-07-22 10:58:29.973908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:22.264 [2024-07-22 10:58:29.973969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:22.264 [2024-07-22 10:58:29.973978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:22.264 [2024-07-22 10:58:29.973986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:22.264 [2024-07-22 10:58:29.973993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:22.264 [2024-07-22 10:58:29.974223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:22.264 [2024-07-22 10:58:29.974570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:03:22.264 [2024-07-22 10:58:29.975124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:22.264 [2024-07-22 10:58:29.975122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:22.832 [2024-07-22 10:58:30.639922] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:22.832 Malloc0 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
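Note: with nvmf_tgt running inside the namespace, identify.sh configures it over the RPC socket and then queries the discovery service. A hedged sketch of the same sequence expressed as direct scripts/rpc.py calls (rpc_cmd in the trace is assumed to wrap that script); every argument is taken from the rpc_cmd invocations above and below, and the nvme-cli line at the end is only a rough equivalent of the identify query the test actually runs:

# hedged sketch of the RPC configuration driven by identify.sh
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# the discovery log page dumped further down is then fetched from the initiator side:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
# roughly equivalent query with nvme-cli, assuming the nvme-tcp module loaded above:
nvme discover -t tcp -a 10.0.0.2 -s 4420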
01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:22.832 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:23.092 [2024-07-22 10:58:30.769928] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:03:23.093 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:23.093 10:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:03:23.093 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:23.093 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:23.093 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:23.093 10:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 01:03:23.093 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:23.093 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:23.093 [ 01:03:23.093 { 01:03:23.093 "allow_any_host": true, 01:03:23.093 "hosts": [], 01:03:23.093 "listen_addresses": [ 01:03:23.093 { 01:03:23.093 "adrfam": "IPv4", 01:03:23.093 "traddr": "10.0.0.2", 01:03:23.093 "trsvcid": "4420", 01:03:23.093 "trtype": "TCP" 01:03:23.093 } 01:03:23.093 ], 01:03:23.093 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:03:23.093 "subtype": "Discovery" 01:03:23.093 }, 01:03:23.093 { 01:03:23.093 "allow_any_host": true, 01:03:23.093 "hosts": [], 01:03:23.093 "listen_addresses": [ 01:03:23.093 { 01:03:23.093 "adrfam": "IPv4", 01:03:23.093 "traddr": "10.0.0.2", 01:03:23.093 "trsvcid": "4420", 01:03:23.093 "trtype": "TCP" 01:03:23.093 } 01:03:23.093 ], 01:03:23.093 "max_cntlid": 65519, 01:03:23.093 "max_namespaces": 32, 01:03:23.093 "min_cntlid": 1, 01:03:23.093 "model_number": "SPDK bdev Controller", 01:03:23.093 "namespaces": [ 01:03:23.093 { 01:03:23.093 "bdev_name": "Malloc0", 01:03:23.093 "eui64": "ABCDEF0123456789", 01:03:23.093 "name": "Malloc0", 01:03:23.093 "nguid": "ABCDEF0123456789ABCDEF0123456789", 01:03:23.093 "nsid": 1, 01:03:23.093 "uuid": "95e02e9e-6420-44ea-98ad-1264ce540481" 01:03:23.093 } 01:03:23.093 ], 01:03:23.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:03:23.093 "serial_number": "SPDK00000000000001", 01:03:23.093 "subtype": "NVMe" 01:03:23.093 } 01:03:23.093 ] 01:03:23.093 10:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:23.093 10:58:30 nvmf_tcp.nvmf_identify -- 
host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 01:03:23.093 [2024-07-22 10:58:30.846484] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:03:23.093 [2024-07-22 10:58:30.846543] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105119 ] 01:03:23.093 [2024-07-22 10:58:30.965886] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:03:23.093 [2024-07-22 10:58:30.981927] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 01:03:23.093 [2024-07-22 10:58:30.982001] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:03:23.093 [2024-07-22 10:58:30.982006] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:03:23.093 [2024-07-22 10:58:30.982023] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:03:23.093 [2024-07-22 10:58:30.982031] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:03:23.093 [2024-07-22 10:58:30.982169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 01:03:23.093 [2024-07-22 10:58:30.982207] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf96d00 0 01:03:23.093 [2024-07-22 10:58:30.997290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:03:23.093 [2024-07-22 10:58:30.997326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:03:23.093 [2024-07-22 10:58:30.997332] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:03:23.093 [2024-07-22 10:58:30.997336] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:03:23.093 [2024-07-22 10:58:30.997385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:30.997392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:30.997397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf96d00) 01:03:23.093 [2024-07-22 10:58:30.997435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:03:23.093 [2024-07-22 10:58:30.997479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd500, cid 0, qid 0 01:03:23.093 [2024-07-22 10:58:31.005297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.093 [2024-07-22 10:58:31.005325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.093 [2024-07-22 10:58:31.005331] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd500) on tqpair=0xf96d00 01:03:23.093 [2024-07-22 10:58:31.005351] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:03:23.093 [2024-07-22 10:58:31.005361] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 01:03:23.093 [2024-07-22 10:58:31.005368] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 01:03:23.093 [2024-07-22 10:58:31.005393] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf96d00) 01:03:23.093 [2024-07-22 10:58:31.005419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.093 [2024-07-22 10:58:31.005457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd500, cid 0, qid 0 01:03:23.093 [2024-07-22 10:58:31.005526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.093 [2024-07-22 10:58:31.005533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.093 [2024-07-22 10:58:31.005538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd500) on tqpair=0xf96d00 01:03:23.093 [2024-07-22 10:58:31.005549] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 01:03:23.093 [2024-07-22 10:58:31.005557] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 01:03:23.093 [2024-07-22 10:58:31.005565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf96d00) 01:03:23.093 [2024-07-22 10:58:31.005583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.093 [2024-07-22 10:58:31.005599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd500, cid 0, qid 0 01:03:23.093 [2024-07-22 10:58:31.005641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.093 [2024-07-22 10:58:31.005648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.093 [2024-07-22 10:58:31.005653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd500) on tqpair=0xf96d00 01:03:23.093 [2024-07-22 10:58:31.005664] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 01:03:23.093 [2024-07-22 10:58:31.005682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 01:03:23.093 [2024-07-22 10:58:31.005692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xf96d00) 01:03:23.093 [2024-07-22 10:58:31.005707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.093 [2024-07-22 10:58:31.005727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd500, cid 0, qid 0 01:03:23.093 [2024-07-22 10:58:31.005768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.093 [2024-07-22 10:58:31.005776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.093 [2024-07-22 10:58:31.005782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd500) on tqpair=0xf96d00 01:03:23.093 [2024-07-22 10:58:31.005796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:03:23.093 [2024-07-22 10:58:31.005808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf96d00) 01:03:23.093 [2024-07-22 10:58:31.005830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.093 [2024-07-22 10:58:31.005852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd500, cid 0, qid 0 01:03:23.093 [2024-07-22 10:58:31.005893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.093 [2024-07-22 10:58:31.005902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.093 [2024-07-22 10:58:31.005906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.005911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd500) on tqpair=0xf96d00 01:03:23.093 [2024-07-22 10:58:31.005918] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 01:03:23.093 [2024-07-22 10:58:31.005927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 01:03:23.093 [2024-07-22 10:58:31.005936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:03:23.093 [2024-07-22 10:58:31.006044] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 01:03:23.093 [2024-07-22 10:58:31.006055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:03:23.093 [2024-07-22 10:58:31.006068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.006075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.093 [2024-07-22 10:58:31.006081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf96d00) 01:03:23.093 [2024-07-22 10:58:31.006088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:03:23.093 [2024-07-22 10:58:31.006106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd500, cid 0, qid 0 01:03:23.093 [2024-07-22 10:58:31.006146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.093 [2024-07-22 10:58:31.006153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.093 [2024-07-22 10:58:31.006157] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd500) on tqpair=0xf96d00 01:03:23.094 [2024-07-22 10:58:31.006168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:03:23.094 [2024-07-22 10:58:31.006180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.006203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.094 [2024-07-22 10:58:31.006222] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd500, cid 0, qid 0 01:03:23.094 [2024-07-22 10:58:31.006276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.094 [2024-07-22 10:58:31.006284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.094 [2024-07-22 10:58:31.006289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd500) on tqpair=0xf96d00 01:03:23.094 [2024-07-22 10:58:31.006303] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:03:23.094 [2024-07-22 10:58:31.006311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 01:03:23.094 [2024-07-22 10:58:31.006322] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 01:03:23.094 [2024-07-22 10:58:31.006338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 01:03:23.094 [2024-07-22 10:58:31.006353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.006366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.094 [2024-07-22 10:58:31.006387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd500, cid 0, qid 0 01:03:23.094 [2024-07-22 10:58:31.006466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.094 [2024-07-22 10:58:31.006474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.094 [2024-07-22 10:58:31.006479] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006484] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf96d00): datao=0, datal=4096, cccid=0 01:03:23.094 [2024-07-22 10:58:31.006490] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd500) on tqpair(0xf96d00): expected_datao=0, payload_size=4096 01:03:23.094 [2024-07-22 10:58:31.006496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006505] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006512] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.094 [2024-07-22 10:58:31.006532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.094 [2024-07-22 10:58:31.006539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd500) on tqpair=0xf96d00 01:03:23.094 [2024-07-22 10:58:31.006555] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 01:03:23.094 [2024-07-22 10:58:31.006561] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 01:03:23.094 [2024-07-22 10:58:31.006568] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 01:03:23.094 [2024-07-22 10:58:31.006576] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 01:03:23.094 [2024-07-22 10:58:31.006584] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 01:03:23.094 [2024-07-22 10:58:31.006591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 01:03:23.094 [2024-07-22 10:58:31.006600] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 01:03:23.094 [2024-07-22 10:58:31.006608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.006625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:03:23.094 [2024-07-22 10:58:31.006643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd500, cid 0, qid 0 01:03:23.094 [2024-07-22 10:58:31.006697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.094 [2024-07-22 10:58:31.006706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.094 [2024-07-22 10:58:31.006712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd500) on tqpair=0xf96d00 01:03:23.094 [2024-07-22 10:58:31.006733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.094 [2024-07-22 
10:58:31.006738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.006750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:03:23.094 [2024-07-22 10:58:31.006760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006773] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.006780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:03:23.094 [2024-07-22 10:58:31.006787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.006803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:03:23.094 [2024-07-22 10:58:31.006810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.006826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:03:23.094 [2024-07-22 10:58:31.006834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 01:03:23.094 [2024-07-22 10:58:31.006846] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:03:23.094 [2024-07-22 10:58:31.006855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.006861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.006871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.094 [2024-07-22 10:58:31.006890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd500, cid 0, qid 0 01:03:23.094 [2024-07-22 10:58:31.006897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd680, cid 1, qid 0 01:03:23.094 [2024-07-22 10:58:31.006904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd800, cid 2, qid 0 01:03:23.094 [2024-07-22 10:58:31.006911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.094 [2024-07-22 10:58:31.006918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfddb00, cid 4, qid 0 01:03:23.094 [2024-07-22 10:58:31.006982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 01:03:23.094 [2024-07-22 10:58:31.006991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.094 [2024-07-22 10:58:31.006998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.007004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfddb00) on tqpair=0xf96d00 01:03:23.094 [2024-07-22 10:58:31.007015] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 01:03:23.094 [2024-07-22 10:58:31.007022] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 01:03:23.094 [2024-07-22 10:58:31.007034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.007040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.007050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.094 [2024-07-22 10:58:31.007067] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfddb00, cid 4, qid 0 01:03:23.094 [2024-07-22 10:58:31.007118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.094 [2024-07-22 10:58:31.007127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.094 [2024-07-22 10:58:31.007133] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.007139] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf96d00): datao=0, datal=4096, cccid=4 01:03:23.094 [2024-07-22 10:58:31.007149] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfddb00) on tqpair(0xf96d00): expected_datao=0, payload_size=4096 01:03:23.094 [2024-07-22 10:58:31.007155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.007163] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.007167] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.007177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.094 [2024-07-22 10:58:31.007186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.094 [2024-07-22 10:58:31.007192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.007199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfddb00) on tqpair=0xf96d00 01:03:23.094 [2024-07-22 10:58:31.007217] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 01:03:23.094 [2024-07-22 10:58:31.007246] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.007254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.007264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.094 [2024-07-22 10:58:31.007287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.094 [2024-07-22 10:58:31.007292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.094 
[2024-07-22 10:58:31.007297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf96d00) 01:03:23.094 [2024-07-22 10:58:31.007305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:03:23.094 [2024-07-22 10:58:31.007331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfddb00, cid 4, qid 0 01:03:23.094 [2024-07-22 10:58:31.007340] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfddc80, cid 5, qid 0 01:03:23.095 [2024-07-22 10:58:31.007417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.095 [2024-07-22 10:58:31.007427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.095 [2024-07-22 10:58:31.007434] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.095 [2024-07-22 10:58:31.007440] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf96d00): datao=0, datal=1024, cccid=4 01:03:23.095 [2024-07-22 10:58:31.007448] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfddb00) on tqpair(0xf96d00): expected_datao=0, payload_size=1024 01:03:23.095 [2024-07-22 10:58:31.007455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.095 [2024-07-22 10:58:31.007462] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.095 [2024-07-22 10:58:31.007467] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.095 [2024-07-22 10:58:31.007474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.095 [2024-07-22 10:58:31.007482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.095 [2024-07-22 10:58:31.007487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.095 [2024-07-22 10:58:31.007495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfddc80) on tqpair=0xf96d00 01:03:23.358 [2024-07-22 10:58:31.048348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.358 [2024-07-22 10:58:31.048380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.358 [2024-07-22 10:58:31.048385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfddb00) on tqpair=0xf96d00 01:03:23.358 [2024-07-22 10:58:31.048413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf96d00) 01:03:23.358 [2024-07-22 10:58:31.048430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.358 [2024-07-22 10:58:31.048467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfddb00, cid 4, qid 0 01:03:23.358 [2024-07-22 10:58:31.048530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.358 [2024-07-22 10:58:31.048536] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.358 [2024-07-22 10:58:31.048541] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048544] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf96d00): datao=0, datal=3072, cccid=4 01:03:23.358 [2024-07-22 10:58:31.048549] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfddb00) on tqpair(0xf96d00): expected_datao=0, payload_size=3072 01:03:23.358 [2024-07-22 10:58:31.048554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048562] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048566] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.358 [2024-07-22 10:58:31.048579] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.358 [2024-07-22 10:58:31.048583] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfddb00) on tqpair=0xf96d00 01:03:23.358 [2024-07-22 10:58:31.048594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf96d00) 01:03:23.358 [2024-07-22 10:58:31.048605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.358 [2024-07-22 10:58:31.048625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfddb00, cid 4, qid 0 01:03:23.358 [2024-07-22 10:58:31.048678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.358 [2024-07-22 10:58:31.048684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.358 [2024-07-22 10:58:31.048688] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048691] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf96d00): datao=0, datal=8, cccid=4 01:03:23.358 [2024-07-22 10:58:31.048696] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfddb00) on tqpair(0xf96d00): expected_datao=0, payload_size=8 01:03:23.358 [2024-07-22 10:58:31.048701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048706] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.048710] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.096322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.358 [2024-07-22 10:58:31.096355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.358 [2024-07-22 10:58:31.096360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.358 [2024-07-22 10:58:31.096366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfddb00) on tqpair=0xf96d00 01:03:23.358 ===================================================== 01:03:23.358 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 01:03:23.358 ===================================================== 01:03:23.358 Controller Capabilities/Features 01:03:23.358 ================================ 01:03:23.358 Vendor ID: 0000 01:03:23.358 Subsystem Vendor ID: 0000 01:03:23.358 Serial Number: .................... 01:03:23.358 Model Number: ........................................ 
01:03:23.358 Firmware Version: 24.09
01:03:23.358 Recommended Arb Burst: 0
01:03:23.358 IEEE OUI Identifier: 00 00 00
01:03:23.358 Multi-path I/O
01:03:23.358 May have multiple subsystem ports: No
01:03:23.358 May have multiple controllers: No
01:03:23.358 Associated with SR-IOV VF: No
01:03:23.358 Max Data Transfer Size: 131072
01:03:23.358 Max Number of Namespaces: 0
01:03:23.358 Max Number of I/O Queues: 1024
01:03:23.358 NVMe Specification Version (VS): 1.3
01:03:23.358 NVMe Specification Version (Identify): 1.3
01:03:23.358 Maximum Queue Entries: 128
01:03:23.358 Contiguous Queues Required: Yes
01:03:23.358 Arbitration Mechanisms Supported
01:03:23.358 Weighted Round Robin: Not Supported
01:03:23.358 Vendor Specific: Not Supported
01:03:23.358 Reset Timeout: 15000 ms
01:03:23.358 Doorbell Stride: 4 bytes
01:03:23.359 NVM Subsystem Reset: Not Supported
01:03:23.359 Command Sets Supported
01:03:23.359 NVM Command Set: Supported
01:03:23.359 Boot Partition: Not Supported
01:03:23.359 Memory Page Size Minimum: 4096 bytes
01:03:23.359 Memory Page Size Maximum: 4096 bytes
01:03:23.359 Persistent Memory Region: Not Supported
01:03:23.359 Optional Asynchronous Events Supported
01:03:23.359 Namespace Attribute Notices: Not Supported
01:03:23.359 Firmware Activation Notices: Not Supported
01:03:23.359 ANA Change Notices: Not Supported
01:03:23.359 PLE Aggregate Log Change Notices: Not Supported
01:03:23.359 LBA Status Info Alert Notices: Not Supported
01:03:23.359 EGE Aggregate Log Change Notices: Not Supported
01:03:23.359 Normal NVM Subsystem Shutdown event: Not Supported
01:03:23.359 Zone Descriptor Change Notices: Not Supported
01:03:23.359 Discovery Log Change Notices: Supported
01:03:23.359 Controller Attributes
01:03:23.359 128-bit Host Identifier: Not Supported
01:03:23.359 Non-Operational Permissive Mode: Not Supported
01:03:23.359 NVM Sets: Not Supported
01:03:23.359 Read Recovery Levels: Not Supported
01:03:23.359 Endurance Groups: Not Supported
01:03:23.359 Predictable Latency Mode: Not Supported
01:03:23.359 Traffic Based Keep ALive: Not Supported
01:03:23.359 Namespace Granularity: Not Supported
01:03:23.359 SQ Associations: Not Supported
01:03:23.359 UUID List: Not Supported
01:03:23.359 Multi-Domain Subsystem: Not Supported
01:03:23.359 Fixed Capacity Management: Not Supported
01:03:23.359 Variable Capacity Management: Not Supported
01:03:23.359 Delete Endurance Group: Not Supported
01:03:23.359 Delete NVM Set: Not Supported
01:03:23.359 Extended LBA Formats Supported: Not Supported
01:03:23.359 Flexible Data Placement Supported: Not Supported
01:03:23.359 
01:03:23.359 Controller Memory Buffer Support
01:03:23.359 ================================
01:03:23.359 Supported: No
01:03:23.359 
01:03:23.359 Persistent Memory Region Support
01:03:23.359 ================================
01:03:23.359 Supported: No
01:03:23.359 
01:03:23.359 Admin Command Set Attributes
01:03:23.359 ============================
01:03:23.359 Security Send/Receive: Not Supported
01:03:23.359 Format NVM: Not Supported
01:03:23.359 Firmware Activate/Download: Not Supported
01:03:23.359 Namespace Management: Not Supported
01:03:23.359 Device Self-Test: Not Supported
01:03:23.359 Directives: Not Supported
01:03:23.359 NVMe-MI: Not Supported
01:03:23.359 Virtualization Management: Not Supported
01:03:23.359 Doorbell Buffer Config: Not Supported
01:03:23.359 Get LBA Status Capability: Not Supported
01:03:23.359 Command & Feature Lockdown Capability: Not Supported
01:03:23.359 Abort Command Limit: 1
01:03:23.359 Async Event Request Limit: 4
01:03:23.359 Number of Firmware Slots: N/A
01:03:23.359 Firmware Slot 1 Read-Only: N/A
01:03:23.359 Firmware Activation Without Reset: N/A
01:03:23.359 Multiple Update Detection Support: N/A
01:03:23.359 Firmware Update Granularity: No Information Provided
01:03:23.359 Per-Namespace SMART Log: No
01:03:23.359 Asymmetric Namespace Access Log Page: Not Supported
01:03:23.359 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
01:03:23.359 Command Effects Log Page: Not Supported
01:03:23.359 Get Log Page Extended Data: Supported
01:03:23.359 Telemetry Log Pages: Not Supported
01:03:23.359 Persistent Event Log Pages: Not Supported
01:03:23.359 Supported Log Pages Log Page: May Support
01:03:23.359 Commands Supported & Effects Log Page: Not Supported
01:03:23.359 Feature Identifiers & Effects Log Page:May Support
01:03:23.359 NVMe-MI Commands & Effects Log Page: May Support
01:03:23.359 Data Area 4 for Telemetry Log: Not Supported
01:03:23.359 Error Log Page Entries Supported: 128
01:03:23.359 Keep Alive: Not Supported
01:03:23.359 
01:03:23.359 NVM Command Set Attributes
01:03:23.359 ==========================
01:03:23.359 Submission Queue Entry Size
01:03:23.359 Max: 1
01:03:23.359 Min: 1
01:03:23.359 Completion Queue Entry Size
01:03:23.359 Max: 1
01:03:23.359 Min: 1
01:03:23.359 Number of Namespaces: 0
01:03:23.359 Compare Command: Not Supported
01:03:23.359 Write Uncorrectable Command: Not Supported
01:03:23.359 Dataset Management Command: Not Supported
01:03:23.359 Write Zeroes Command: Not Supported
01:03:23.359 Set Features Save Field: Not Supported
01:03:23.359 Reservations: Not Supported
01:03:23.359 Timestamp: Not Supported
01:03:23.359 Copy: Not Supported
01:03:23.359 Volatile Write Cache: Not Present
01:03:23.359 Atomic Write Unit (Normal): 1
01:03:23.359 Atomic Write Unit (PFail): 1
01:03:23.359 Atomic Compare & Write Unit: 1
01:03:23.359 Fused Compare & Write: Supported
01:03:23.359 Scatter-Gather List
01:03:23.359 SGL Command Set: Supported
01:03:23.359 SGL Keyed: Supported
01:03:23.359 SGL Bit Bucket Descriptor: Not Supported
01:03:23.359 SGL Metadata Pointer: Not Supported
01:03:23.359 Oversized SGL: Not Supported
01:03:23.359 SGL Metadata Address: Not Supported
01:03:23.359 SGL Offset: Supported
01:03:23.359 Transport SGL Data Block: Not Supported
01:03:23.359 Replay Protected Memory Block: Not Supported
01:03:23.359 
01:03:23.359 Firmware Slot Information
01:03:23.359 =========================
01:03:23.359 Active slot: 0
01:03:23.359 
01:03:23.359 
01:03:23.359 Error Log
01:03:23.359 =========
01:03:23.359 
01:03:23.359 Active Namespaces
01:03:23.359 =================
01:03:23.359 Discovery Log Page
01:03:23.359 ==================
01:03:23.359 Generation Counter: 2
01:03:23.359 Number of Records: 2
01:03:23.359 Record Format: 0
01:03:23.359 
01:03:23.359 Discovery Log Entry 0
01:03:23.359 ----------------------
01:03:23.359 Transport Type: 3 (TCP)
01:03:23.359 Address Family: 1 (IPv4)
01:03:23.359 Subsystem Type: 3 (Current Discovery Subsystem)
01:03:23.359 Entry Flags:
01:03:23.359 Duplicate Returned Information: 1
01:03:23.359 Explicit Persistent Connection Support for Discovery: 1
01:03:23.359 Transport Requirements:
01:03:23.359 Secure Channel: Not Required
01:03:23.359 Port ID: 0 (0x0000)
01:03:23.359 Controller ID: 65535 (0xffff)
01:03:23.359 Admin Max SQ Size: 128
01:03:23.359 Transport Service Identifier: 4420
01:03:23.359 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
01:03:23.359 Transport Address: 10.0.0.2
01:03:23.359 
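The discovery log page reproduced above is what an NVMe-oF host consumes to locate this target: entry 0 describes the discovery service itself, and entry 1 (below) advertises the NVM subsystem nqn.2016-06.io.spdk:cnode1 behind the same TCP listener. Purely as a hedged illustration, not something executed in this job, the equivalent lookup and attach could be done from a Linux host with nvme-cli, reusing the transport parameters reported in these entries:

  # Hypothetical commands for illustration only; address, port and NQN are taken from the entries above/below.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
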
Discovery Log Entry 1 01:03:23.359 ---------------------- 01:03:23.359 Transport Type: 3 (TCP) 01:03:23.359 Address Family: 1 (IPv4) 01:03:23.359 Subsystem Type: 2 (NVM Subsystem) 01:03:23.359 Entry Flags: 01:03:23.359 Duplicate Returned Information: 0 01:03:23.359 Explicit Persistent Connection Support for Discovery: 0 01:03:23.359 Transport Requirements: 01:03:23.359 Secure Channel: Not Required 01:03:23.359 Port ID: 0 (0x0000) 01:03:23.359 Controller ID: 65535 (0xffff) 01:03:23.359 Admin Max SQ Size: 128 01:03:23.359 Transport Service Identifier: 4420 01:03:23.359 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 01:03:23.359 Transport Address: 10.0.0.2 [2024-07-22 10:58:31.096477] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 01:03:23.359 [2024-07-22 10:58:31.096488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd500) on tqpair=0xf96d00 01:03:23.359 [2024-07-22 10:58:31.096496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:23.359 [2024-07-22 10:58:31.096502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd680) on tqpair=0xf96d00 01:03:23.359 [2024-07-22 10:58:31.096507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:23.359 [2024-07-22 10:58:31.096512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd800) on tqpair=0xf96d00 01:03:23.359 [2024-07-22 10:58:31.096517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:23.359 [2024-07-22 10:58:31.096523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.359 [2024-07-22 10:58:31.096527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:23.359 [2024-07-22 10:58:31.096540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.359 [2024-07-22 10:58:31.096544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.359 [2024-07-22 10:58:31.096548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.359 [2024-07-22 10:58:31.096559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.359 [2024-07-22 10:58:31.096584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.359 [2024-07-22 10:58:31.096655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.359 [2024-07-22 10:58:31.096665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.359 [2024-07-22 10:58:31.096672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.359 [2024-07-22 10:58:31.096677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.359 [2024-07-22 10:58:31.096685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.359 [2024-07-22 10:58:31.096690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.359 [2024-07-22 10:58:31.096695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.359 [2024-07-22 10:58:31.096703] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.096725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.096790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.096798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.096803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.096808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.096818] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 01:03:23.360 [2024-07-22 10:58:31.096824] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 01:03:23.360 [2024-07-22 10:58:31.096834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.096839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.096844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.096852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.096873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.096916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.096922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.096926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.096930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.096939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.096944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.096950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.096960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.096981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.097019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.097026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.097031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.097045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097055] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.097063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.097084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.097125] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.097135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.097141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.097156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.097176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.097195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.097242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.097251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.097256] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.097293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.097311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.097333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.097372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.097379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.097384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.097402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.097425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.097448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.097489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.097496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.097501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.097515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.097533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.097554] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.097595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.097604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.097609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.097627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.097649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.097668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.097719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.097728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.097734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.097752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.097770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.097792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.097831] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.097840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.097845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.097865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.097887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.097904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.097947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.097956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.097963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.097978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.097990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.098000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.098021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.098062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.098072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.098078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.098085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.360 [2024-07-22 10:58:31.098098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.098104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.360 [2024-07-22 10:58:31.098111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.360 [2024-07-22 10:58:31.098119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.360 [2024-07-22 10:58:31.098138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.360 [2024-07-22 10:58:31.098182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.360 [2024-07-22 10:58:31.098189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.360 [2024-07-22 10:58:31.098194] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.098209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.098231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.098247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.098300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.098310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.098316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.098331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.098348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.098365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.098413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.098422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.098428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.098447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.098466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.098483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.098525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.098534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.098541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 
[2024-07-22 10:58:31.098556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098562] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.098578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.098598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.098638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.098645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.098649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.098667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.098686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.098703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.098744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.098751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.098756] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.098770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.098787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.098803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.098849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.098859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.098865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.098884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 
10:58:31.098894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.098901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.098921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.098966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.098973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.098977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.098984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.098996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.099020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.099038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.099080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.099087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.099092] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.099110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.099134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.099154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.099194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.099201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.099205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.099224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099231] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.099246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.099276] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.099311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.099318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.099323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.099342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.099366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.099386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.099434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.099443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.099447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.099462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.361 [2024-07-22 10:58:31.099482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.361 [2024-07-22 10:58:31.099500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.361 [2024-07-22 10:58:31.099549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.361 [2024-07-22 10:58:31.099558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.361 [2024-07-22 10:58:31.099565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.361 [2024-07-22 10:58:31.099580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.361 [2024-07-22 10:58:31.099591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.099600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.099618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 
10:58:31.099659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.099667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.362 [2024-07-22 10:58:31.099673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.099680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.362 [2024-07-22 10:58:31.099692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.099698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.099703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.099710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.099728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 10:58:31.099772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.099779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.362 [2024-07-22 10:58:31.099784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.099789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.362 [2024-07-22 10:58:31.099799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.099806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.099811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.099821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.099840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 10:58:31.099882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.099892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.362 [2024-07-22 10:58:31.099898] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.099904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.362 [2024-07-22 10:58:31.099916] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.099921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.099926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.099933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.099951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 10:58:31.099993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.100001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.362 [2024-07-22 
10:58:31.100005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.362 [2024-07-22 10:58:31.100023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.100042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.100058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 10:58:31.100105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.100115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.362 [2024-07-22 10:58:31.100119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.362 [2024-07-22 10:58:31.100134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100139] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.100151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.100171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 10:58:31.100210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.100217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.362 [2024-07-22 10:58:31.100222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.362 [2024-07-22 10:58:31.100242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.100263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.100294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 10:58:31.100336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.100342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.362 [2024-07-22 10:58:31.100347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 
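The cycle repeating through this part of the trace is the teardown of the discovery controller: on a fabrics transport, the CC/CSTS register accesses of a normal NVMe shutdown are carried as Fabrics Property Set/Get admin commands, so the single FABRIC PROPERTY SET seen earlier (the CC write requesting shutdown) is followed by a run of FABRIC PROPERTY GET completions while the host polls CSTS for the shutdown-complete status, bounded by the "shutdown timeout = 10000 ms" noted above. As a hedged sketch only, assuming a fabrics controller connected as /dev/nvme0 and a reasonably recent nvme-cli, the same property can be read by hand:

  # Illustration only (not from this run): read CSTS (property offset 0x1c) from a connected fabrics controller.
  nvme get-property /dev/nvme0 --offset=0x1c --human-readable
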
01:03:23.362 [2024-07-22 10:58:31.100361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.100383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.100404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 10:58:31.100447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.100456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.362 [2024-07-22 10:58:31.100463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.362 [2024-07-22 10:58:31.100482] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.100502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.100522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 10:58:31.100562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.100569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.362 [2024-07-22 10:58:31.100574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.362 [2024-07-22 10:58:31.100591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100604] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.100612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.100628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 10:58:31.100673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.100682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.362 [2024-07-22 10:58:31.100687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.362 [2024-07-22 10:58:31.100701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.362 [2024-07-22 10:58:31.100707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
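Output of this shape typically comes from SPDK's identify example pointed at an NVMe-oF listener with debug log flags enabled. A minimal sketch of such an invocation, assuming a built SPDK tree and the example's usual options (-r taking a transport-ID string, -L enabling a debug log flag), would be:

  # Assumed invocation for illustration; the exact binary path and flags used by this job's test scripts are not shown here.
  ./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -L all
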
01:03:23.362 [2024-07-22 10:58:31.100713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.362 [2024-07-22 10:58:31.100723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.362 [2024-07-22 10:58:31.100740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.362 [2024-07-22 10:58:31.100782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.362 [2024-07-22 10:58:31.100791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.363 [2024-07-22 10:58:31.100797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.100804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.363 [2024-07-22 10:58:31.100817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.100822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.100827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.363 [2024-07-22 10:58:31.100834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.363 [2024-07-22 10:58:31.100853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.363 [2024-07-22 10:58:31.100890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.363 [2024-07-22 10:58:31.100897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.363 [2024-07-22 10:58:31.100902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.100907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.363 [2024-07-22 10:58:31.100916] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.100923] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.100929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.363 [2024-07-22 10:58:31.100939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.363 [2024-07-22 10:58:31.100960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.363 [2024-07-22 10:58:31.101005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.363 [2024-07-22 10:58:31.101014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.363 [2024-07-22 10:58:31.101020] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.363 [2024-07-22 10:58:31.101039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.363 [2024-07-22 10:58:31.101057] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.363 [2024-07-22 10:58:31.101076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.363 [2024-07-22 10:58:31.101113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.363 [2024-07-22 10:58:31.101120] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.363 [2024-07-22 10:58:31.101125] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.363 [2024-07-22 10:58:31.101142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.363 [2024-07-22 10:58:31.101161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.363 [2024-07-22 10:58:31.101177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.363 [2024-07-22 10:58:31.101225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.363 [2024-07-22 10:58:31.101235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.363 [2024-07-22 10:58:31.101240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.363 [2024-07-22 10:58:31.101254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.363 [2024-07-22 10:58:31.101285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.363 [2024-07-22 10:58:31.101306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.363 [2024-07-22 10:58:31.101350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.363 [2024-07-22 10:58:31.101359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.363 [2024-07-22 10:58:31.101366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.363 [2024-07-22 10:58:31.101384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.363 [2024-07-22 10:58:31.101403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.363 [2024-07-22 10:58:31.101423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xfdd980, cid 3, qid 0 01:03:23.363 [2024-07-22 10:58:31.101462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.363 [2024-07-22 10:58:31.101469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.363 [2024-07-22 10:58:31.101474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.363 [2024-07-22 10:58:31.101490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.363 [2024-07-22 10:58:31.101503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.363 [2024-07-22 10:58:31.101513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.363 [2024-07-22 10:58:31.101531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.365 [2024-07-22 10:58:31.103986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.365 [2024-07-22 10:58:31.103995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.365 [2024-07-22 10:58:31.104001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.365 [2024-07-22
10:58:31.104007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.365 [2024-07-22 10:58:31.104021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.365 [2024-07-22 10:58:31.104028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.365 [2024-07-22 10:58:31.104034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.365 [2024-07-22 10:58:31.104041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.365 [2024-07-22 10:58:31.104057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.365 [2024-07-22 10:58:31.104099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.365 [2024-07-22 10:58:31.104108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.365 [2024-07-22 10:58:31.104115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.365 [2024-07-22 10:58:31.104121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.365 [2024-07-22 10:58:31.104131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.365 [2024-07-22 10:58:31.104136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.365 [2024-07-22 10:58:31.104141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.365 [2024-07-22 10:58:31.104149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.365 [2024-07-22 10:58:31.104171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.365 [2024-07-22 10:58:31.104217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.365 [2024-07-22 10:58:31.104226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.365 [2024-07-22 10:58:31.104233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.365 [2024-07-22 10:58:31.104239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.365 [2024-07-22 10:58:31.104252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.365 [2024-07-22 10:58:31.104259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.365 [2024-07-22 10:58:31.108288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf96d00) 01:03:23.365 [2024-07-22 10:58:31.108313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.365 [2024-07-22 10:58:31.108352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd980, cid 3, qid 0 01:03:23.365 [2024-07-22 10:58:31.108412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.365 [2024-07-22 10:58:31.108421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.365 [2024-07-22 10:58:31.108426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.365 [2024-07-22 10:58:31.108431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd980) on tqpair=0xf96d00 01:03:23.365 [2024-07-22 10:58:31.108441] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 11 milliseconds 01:03:23.365 01:03:23.365 10:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 01:03:23.365 [2024-07-22 10:58:31.158443] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:03:23.365 [2024-07-22 10:58:31.158499] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105127 ] 01:03:23.365 [2024-07-22 10:58:31.279885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:03:23.629 [2024-07-22 10:58:31.295903] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 01:03:23.629 [2024-07-22 10:58:31.295976] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:03:23.629 [2024-07-22 10:58:31.295982] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:03:23.629 [2024-07-22 10:58:31.296000] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:03:23.629 [2024-07-22 10:58:31.296007] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:03:23.629 [2024-07-22 10:58:31.296134] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 01:03:23.629 [2024-07-22 10:58:31.296171] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1585d00 0 01:03:23.629 [2024-07-22 10:58:31.311289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:03:23.629 [2024-07-22 10:58:31.311313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:03:23.629 [2024-07-22 10:58:31.311318] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:03:23.629 [2024-07-22 10:58:31.311322] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:03:23.629 [2024-07-22 10:58:31.311369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.311375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.311380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1585d00) 01:03:23.629 [2024-07-22 10:58:31.311394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:03:23.629 [2024-07-22 10:58:31.311426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc500, cid 0, qid 0 01:03:23.629 [2024-07-22 10:58:31.319290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.629 [2024-07-22 10:58:31.319319] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.629 [2024-07-22 10:58:31.319324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc500) on tqpair=0x1585d00 01:03:23.629 [2024-07-22 10:58:31.319341] nvme_fabric.c: 
622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:03:23.629 [2024-07-22 10:58:31.319350] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 01:03:23.629 [2024-07-22 10:58:31.319356] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 01:03:23.629 [2024-07-22 10:58:31.319377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319382] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1585d00) 01:03:23.629 [2024-07-22 10:58:31.319397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.629 [2024-07-22 10:58:31.319425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc500, cid 0, qid 0 01:03:23.629 [2024-07-22 10:58:31.319485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.629 [2024-07-22 10:58:31.319491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.629 [2024-07-22 10:58:31.319494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc500) on tqpair=0x1585d00 01:03:23.629 [2024-07-22 10:58:31.319504] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 01:03:23.629 [2024-07-22 10:58:31.319511] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 01:03:23.629 [2024-07-22 10:58:31.319518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1585d00) 01:03:23.629 [2024-07-22 10:58:31.319532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.629 [2024-07-22 10:58:31.319546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc500, cid 0, qid 0 01:03:23.629 [2024-07-22 10:58:31.319588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.629 [2024-07-22 10:58:31.319594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.629 [2024-07-22 10:58:31.319597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc500) on tqpair=0x1585d00 01:03:23.629 [2024-07-22 10:58:31.319607] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 01:03:23.629 [2024-07-22 10:58:31.319619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 01:03:23.629 [2024-07-22 10:58:31.319628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319639] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1585d00) 01:03:23.629 [2024-07-22 10:58:31.319648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.629 [2024-07-22 10:58:31.319668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc500, cid 0, qid 0 01:03:23.629 [2024-07-22 10:58:31.319711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.629 [2024-07-22 10:58:31.319720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.629 [2024-07-22 10:58:31.319726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc500) on tqpair=0x1585d00 01:03:23.629 [2024-07-22 10:58:31.319739] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:03:23.629 [2024-07-22 10:58:31.319753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1585d00) 01:03:23.629 [2024-07-22 10:58:31.319775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.629 [2024-07-22 10:58:31.319792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc500, cid 0, qid 0 01:03:23.629 [2024-07-22 10:58:31.319832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.629 [2024-07-22 10:58:31.319840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.629 [2024-07-22 10:58:31.319846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.319853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc500) on tqpair=0x1585d00 01:03:23.629 [2024-07-22 10:58:31.319859] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 01:03:23.629 [2024-07-22 10:58:31.319867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 01:03:23.629 [2024-07-22 10:58:31.319878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:03:23.629 [2024-07-22 10:58:31.319986] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 01:03:23.629 [2024-07-22 10:58:31.319993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:03:23.629 [2024-07-22 10:58:31.320003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.320009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.320015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1585d00) 01:03:23.629 [2024-07-22 10:58:31.320023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:03:23.629 [2024-07-22 10:58:31.320046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc500, cid 0, qid 0 01:03:23.629 [2024-07-22 10:58:31.320089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.629 [2024-07-22 10:58:31.320098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.629 [2024-07-22 10:58:31.320105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.320110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc500) on tqpair=0x1585d00 01:03:23.629 [2024-07-22 10:58:31.320116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:03:23.629 [2024-07-22 10:58:31.320126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.320131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.320135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1585d00) 01:03:23.629 [2024-07-22 10:58:31.320145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.629 [2024-07-22 10:58:31.320165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc500, cid 0, qid 0 01:03:23.629 [2024-07-22 10:58:31.320206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.629 [2024-07-22 10:58:31.320213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.629 [2024-07-22 10:58:31.320218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.320223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc500) on tqpair=0x1585d00 01:03:23.629 [2024-07-22 10:58:31.320228] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:03:23.629 [2024-07-22 10:58:31.320234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 01:03:23.629 [2024-07-22 10:58:31.320245] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 01:03:23.629 [2024-07-22 10:58:31.320261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 01:03:23.629 [2024-07-22 10:58:31.320287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.629 [2024-07-22 10:58:31.320292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1585d00) 01:03:23.629 [2024-07-22 10:58:31.320301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.630 [2024-07-22 10:58:31.320324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc500, cid 0, qid 0 01:03:23.630 [2024-07-22 10:58:31.320398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.630 [2024-07-22 10:58:31.320407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.630 [2024-07-22 10:58:31.320413] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 
10:58:31.320419] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1585d00): datao=0, datal=4096, cccid=0 01:03:23.630 [2024-07-22 10:58:31.320427] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15cc500) on tqpair(0x1585d00): expected_datao=0, payload_size=4096 01:03:23.630 [2024-07-22 10:58:31.320433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320442] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320449] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.630 [2024-07-22 10:58:31.320470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.630 [2024-07-22 10:58:31.320476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc500) on tqpair=0x1585d00 01:03:23.630 [2024-07-22 10:58:31.320494] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 01:03:23.630 [2024-07-22 10:58:31.320500] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 01:03:23.630 [2024-07-22 10:58:31.320506] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 01:03:23.630 [2024-07-22 10:58:31.320512] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 01:03:23.630 [2024-07-22 10:58:31.320520] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 01:03:23.630 [2024-07-22 10:58:31.320528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.320539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.320549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1585d00) 01:03:23.630 [2024-07-22 10:58:31.320572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:03:23.630 [2024-07-22 10:58:31.320594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc500, cid 0, qid 0 01:03:23.630 [2024-07-22 10:58:31.320643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.630 [2024-07-22 10:58:31.320652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.630 [2024-07-22 10:58:31.320657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc500) on tqpair=0x1585d00 01:03:23.630 [2024-07-22 10:58:31.320674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.630 [2024-07-22 
10:58:31.320686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1585d00) 01:03:23.630 [2024-07-22 10:58:31.320695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:03:23.630 [2024-07-22 10:58:31.320705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320717] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1585d00) 01:03:23.630 [2024-07-22 10:58:31.320726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:03:23.630 [2024-07-22 10:58:31.320736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1585d00) 01:03:23.630 [2024-07-22 10:58:31.320754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:03:23.630 [2024-07-22 10:58:31.320763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.630 [2024-07-22 10:58:31.320784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:03:23.630 [2024-07-22 10:58:31.320792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.320805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.320813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1585d00) 01:03:23.630 [2024-07-22 10:58:31.320825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.630 [2024-07-22 10:58:31.320848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc500, cid 0, qid 0 01:03:23.630 [2024-07-22 10:58:31.320856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc680, cid 1, qid 0 01:03:23.630 [2024-07-22 10:58:31.320862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc800, cid 2, qid 0 01:03:23.630 [2024-07-22 10:58:31.320868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.630 [2024-07-22 10:58:31.320873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccb00, cid 4, qid 0 01:03:23.630 [2024-07-22 10:58:31.320941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.630 [2024-07-22 10:58:31.320949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
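The DEBUG trail around this point is the standard SPDK host-side admin-queue bring-up that spdk_nvme_identify drives over NVMe/TCP: FABRIC CONNECT, reading VS/CAP, setting CC.EN=1 and waiting for CSTS.RDY=1, IDENTIFY CONTROLLER, configuring AER, then the keep-alive and number-of-queues features. As a rough orientation only, a minimal host program using SPDK's public API would trigger the same sequence against this target; the transport-ID string below is copied from the identify.sh invocation in this log, while the program name, report printing, and error handling are illustrative assumptions and not the test's actual code.

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (the DPDK EAL initialization seen earlier in this run). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name, not from the test */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport-ID string that identify.sh passes to spdk_nvme_identify via -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() walks the admin-queue state machine logged here:
	 * FABRIC CONNECT, read VS/CAP, enable the controller, IDENTIFY, AER,
	 * keep-alive timeout and number-of-queues features. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Cached IDENTIFY CONTROLLER data; the identify tool builds its report from this. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Model Number:  %.40s\n", cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Such a sketch would typically be built against SPDK's nvme and env_dpdk libraries, the same components whose nvme_tcp.c, nvme_ctrlr.c, and nvme_qpair.c messages appear throughout this log.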
01:03:23.630 [2024-07-22 10:58:31.320954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.320960] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccb00) on tqpair=0x1585d00 01:03:23.630 [2024-07-22 10:58:31.320972] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 01:03:23.630 [2024-07-22 10:58:31.320979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.320988] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.320996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.321003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1585d00) 01:03:23.630 [2024-07-22 10:58:31.321022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:03:23.630 [2024-07-22 10:58:31.321043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccb00, cid 4, qid 0 01:03:23.630 [2024-07-22 10:58:31.321088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.630 [2024-07-22 10:58:31.321097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.630 [2024-07-22 10:58:31.321103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccb00) on tqpair=0x1585d00 01:03:23.630 [2024-07-22 10:58:31.321173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.321187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.321196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1585d00) 01:03:23.630 [2024-07-22 10:58:31.321209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.630 [2024-07-22 10:58:31.321232] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccb00, cid 4, qid 0 01:03:23.630 [2024-07-22 10:58:31.321295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.630 [2024-07-22 10:58:31.321304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.630 [2024-07-22 10:58:31.321308] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321313] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1585d00): datao=0, datal=4096, cccid=4 01:03:23.630 [2024-07-22 
10:58:31.321319] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ccb00) on tqpair(0x1585d00): expected_datao=0, payload_size=4096 01:03:23.630 [2024-07-22 10:58:31.321325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321333] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321338] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.630 [2024-07-22 10:58:31.321356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.630 [2024-07-22 10:58:31.321361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccb00) on tqpair=0x1585d00 01:03:23.630 [2024-07-22 10:58:31.321382] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 01:03:23.630 [2024-07-22 10:58:31.321398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.321409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 01:03:23.630 [2024-07-22 10:58:31.321417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1585d00) 01:03:23.630 [2024-07-22 10:58:31.321431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.630 [2024-07-22 10:58:31.321455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccb00, cid 4, qid 0 01:03:23.630 [2024-07-22 10:58:31.321515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.630 [2024-07-22 10:58:31.321526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.630 [2024-07-22 10:58:31.321532] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321538] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1585d00): datao=0, datal=4096, cccid=4 01:03:23.630 [2024-07-22 10:58:31.321547] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ccb00) on tqpair(0x1585d00): expected_datao=0, payload_size=4096 01:03:23.630 [2024-07-22 10:58:31.321554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321561] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321565] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.630 [2024-07-22 10:58:31.321576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.630 [2024-07-22 10:58:31.321584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.630 [2024-07-22 10:58:31.321590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.321595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccb00) on tqpair=0x1585d00 01:03:23.631 [2024-07-22 10:58:31.321609] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to identify namespace id descriptors (timeout 30000 ms) 01:03:23.631 [2024-07-22 10:58:31.321620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 01:03:23.631 [2024-07-22 10:58:31.321630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.321636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1585d00) 01:03:23.631 [2024-07-22 10:58:31.321646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.631 [2024-07-22 10:58:31.321670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccb00, cid 4, qid 0 01:03:23.631 [2024-07-22 10:58:31.321726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.631 [2024-07-22 10:58:31.321736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.631 [2024-07-22 10:58:31.321742] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.321748] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1585d00): datao=0, datal=4096, cccid=4 01:03:23.631 [2024-07-22 10:58:31.321755] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ccb00) on tqpair(0x1585d00): expected_datao=0, payload_size=4096 01:03:23.631 [2024-07-22 10:58:31.321763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.321771] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.321776] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.321785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.631 [2024-07-22 10:58:31.321794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.631 [2024-07-22 10:58:31.321799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.321804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccb00) on tqpair=0x1585d00 01:03:23.631 [2024-07-22 10:58:31.321812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 01:03:23.631 [2024-07-22 10:58:31.321823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 01:03:23.631 [2024-07-22 10:58:31.321837] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 01:03:23.631 [2024-07-22 10:58:31.321847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 01:03:23.631 [2024-07-22 10:58:31.321856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 01:03:23.631 [2024-07-22 10:58:31.321864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 01:03:23.631 [2024-07-22 10:58:31.321870] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 01:03:23.631 
[2024-07-22 10:58:31.321876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 01:03:23.631 [2024-07-22 10:58:31.321883] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 01:03:23.631 [2024-07-22 10:58:31.321906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.321914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1585d00) 01:03:23.631 [2024-07-22 10:58:31.321924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.631 [2024-07-22 10:58:31.321934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.321940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.321945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1585d00) 01:03:23.631 [2024-07-22 10:58:31.321952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:03:23.631 [2024-07-22 10:58:31.321980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccb00, cid 4, qid 0 01:03:23.631 [2024-07-22 10:58:31.321990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccc80, cid 5, qid 0 01:03:23.631 [2024-07-22 10:58:31.322038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.631 [2024-07-22 10:58:31.322047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.631 [2024-07-22 10:58:31.322053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccb00) on tqpair=0x1585d00 01:03:23.631 [2024-07-22 10:58:31.322070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.631 [2024-07-22 10:58:31.322078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.631 [2024-07-22 10:58:31.322084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccc80) on tqpair=0x1585d00 01:03:23.631 [2024-07-22 10:58:31.322101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1585d00) 01:03:23.631 [2024-07-22 10:58:31.322117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.631 [2024-07-22 10:58:31.322140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccc80, cid 5, qid 0 01:03:23.631 [2024-07-22 10:58:31.322182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.631 [2024-07-22 10:58:31.322192] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.631 [2024-07-22 10:58:31.322198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccc80) on tqpair=0x1585d00 01:03:23.631 [2024-07-22 10:58:31.322219] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1585d00) 01:03:23.631 [2024-07-22 10:58:31.322236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.631 [2024-07-22 10:58:31.322253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccc80, cid 5, qid 0 01:03:23.631 [2024-07-22 10:58:31.322341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.631 [2024-07-22 10:58:31.322356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.631 [2024-07-22 10:58:31.322362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccc80) on tqpair=0x1585d00 01:03:23.631 [2024-07-22 10:58:31.322377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1585d00) 01:03:23.631 [2024-07-22 10:58:31.322391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.631 [2024-07-22 10:58:31.322418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccc80, cid 5, qid 0 01:03:23.631 [2024-07-22 10:58:31.322463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.631 [2024-07-22 10:58:31.322471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.631 [2024-07-22 10:58:31.322477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccc80) on tqpair=0x1585d00 01:03:23.631 [2024-07-22 10:58:31.322503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1585d00) 01:03:23.631 [2024-07-22 10:58:31.322517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.631 [2024-07-22 10:58:31.322527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1585d00) 01:03:23.631 [2024-07-22 10:58:31.322543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.631 [2024-07-22 10:58:31.322555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1585d00) 01:03:23.631 [2024-07-22 10:58:31.322571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.631 [2024-07-22 10:58:31.322580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322586] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1585d00) 01:03:23.631 [2024-07-22 10:58:31.322593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.631 [2024-07-22 10:58:31.322614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccc80, cid 5, qid 0 01:03:23.631 [2024-07-22 10:58:31.322623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccb00, cid 4, qid 0 01:03:23.631 [2024-07-22 10:58:31.322631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cce00, cid 6, qid 0 01:03:23.631 [2024-07-22 10:58:31.322638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccf80, cid 7, qid 0 01:03:23.631 [2024-07-22 10:58:31.322746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.631 [2024-07-22 10:58:31.322755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.631 [2024-07-22 10:58:31.322760] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322767] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1585d00): datao=0, datal=8192, cccid=5 01:03:23.631 [2024-07-22 10:58:31.322775] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ccc80) on tqpair(0x1585d00): expected_datao=0, payload_size=8192 01:03:23.631 [2024-07-22 10:58:31.322781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322797] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322803] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.631 [2024-07-22 10:58:31.322817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.631 [2024-07-22 10:58:31.322823] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322829] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1585d00): datao=0, datal=512, cccid=4 01:03:23.631 [2024-07-22 10:58:31.322837] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ccb00) on tqpair(0x1585d00): expected_datao=0, payload_size=512 01:03:23.631 [2024-07-22 10:58:31.322845] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322853] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322858] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.631 [2024-07-22 10:58:31.322871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.631 [2024-07-22 10:58:31.322877] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.631 [2024-07-22 10:58:31.322882] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1585d00): datao=0, datal=512, cccid=6 01:03:23.631 [2024-07-22 10:58:31.322891] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15cce00) on tqpair(0x1585d00): expected_datao=0, payload_size=512 01:03:23.632 [2024-07-22 10:58:31.322898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.632 [2024-07-22 
10:58:31.322905] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.632 [2024-07-22 10:58:31.322909] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.632 [2024-07-22 10:58:31.322915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:23.632 [2024-07-22 10:58:31.322922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:23.632 [2024-07-22 10:58:31.322926] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:23.632 [2024-07-22 10:58:31.322931] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1585d00): datao=0, datal=4096, cccid=7 01:03:23.632 [2024-07-22 10:58:31.322937] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ccf80) on tqpair(0x1585d00): expected_datao=0, payload_size=4096 01:03:23.632 [2024-07-22 10:58:31.322944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.632 [2024-07-22 10:58:31.322953] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:23.632 [2024-07-22 10:58:31.322959] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:23.632 [2024-07-22 10:58:31.322968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.632 [2024-07-22 10:58:31.322976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.632 [2024-07-22 10:58:31.322983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.632 [2024-07-22 10:58:31.322988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccc80) on tqpair=0x1585d00 01:03:23.632 [2024-07-22 10:58:31.323004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.632 [2024-07-22 10:58:31.323012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.632 [2024-07-22 10:58:31.323018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.632 [2024-07-22 10:58:31.323025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccb00) on tqpair=0x1585d00 01:03:23.632 [2024-07-22 10:58:31.323041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.632 [2024-07-22 10:58:31.323048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.632 [2024-07-22 10:58:31.323053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.632 [2024-07-22 10:58:31.323057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cce00) on tqpair=0x1585d00 01:03:23.632 [2024-07-22 10:58:31.323065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.632 [2024-07-22 10:58:31.323072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.632 [2024-07-22 10:58:31.323077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.632 [2024-07-22 10:58:31.323083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccf80) on tqpair=0x1585d00 01:03:23.632 ===================================================== 01:03:23.632 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:03:23.632 ===================================================== 01:03:23.632 Controller Capabilities/Features 01:03:23.632 ================================ 01:03:23.632 Vendor ID: 8086 01:03:23.632 Subsystem Vendor ID: 8086 01:03:23.632 Serial Number: SPDK00000000000001 01:03:23.632 Model Number: SPDK bdev Controller 01:03:23.632 Firmware Version: 24.09 01:03:23.632 Recommended Arb Burst: 6 01:03:23.632 
IEEE OUI Identifier: e4 d2 5c 01:03:23.632 Multi-path I/O 01:03:23.632 May have multiple subsystem ports: Yes 01:03:23.632 May have multiple controllers: Yes 01:03:23.632 Associated with SR-IOV VF: No 01:03:23.632 Max Data Transfer Size: 131072 01:03:23.632 Max Number of Namespaces: 32 01:03:23.632 Max Number of I/O Queues: 127 01:03:23.632 NVMe Specification Version (VS): 1.3 01:03:23.632 NVMe Specification Version (Identify): 1.3 01:03:23.632 Maximum Queue Entries: 128 01:03:23.632 Contiguous Queues Required: Yes 01:03:23.632 Arbitration Mechanisms Supported 01:03:23.632 Weighted Round Robin: Not Supported 01:03:23.632 Vendor Specific: Not Supported 01:03:23.632 Reset Timeout: 15000 ms 01:03:23.632 Doorbell Stride: 4 bytes 01:03:23.632 NVM Subsystem Reset: Not Supported 01:03:23.632 Command Sets Supported 01:03:23.632 NVM Command Set: Supported 01:03:23.632 Boot Partition: Not Supported 01:03:23.632 Memory Page Size Minimum: 4096 bytes 01:03:23.632 Memory Page Size Maximum: 4096 bytes 01:03:23.632 Persistent Memory Region: Not Supported 01:03:23.632 Optional Asynchronous Events Supported 01:03:23.632 Namespace Attribute Notices: Supported 01:03:23.632 Firmware Activation Notices: Not Supported 01:03:23.632 ANA Change Notices: Not Supported 01:03:23.632 PLE Aggregate Log Change Notices: Not Supported 01:03:23.632 LBA Status Info Alert Notices: Not Supported 01:03:23.632 EGE Aggregate Log Change Notices: Not Supported 01:03:23.632 Normal NVM Subsystem Shutdown event: Not Supported 01:03:23.632 Zone Descriptor Change Notices: Not Supported 01:03:23.632 Discovery Log Change Notices: Not Supported 01:03:23.632 Controller Attributes 01:03:23.632 128-bit Host Identifier: Supported 01:03:23.632 Non-Operational Permissive Mode: Not Supported 01:03:23.632 NVM Sets: Not Supported 01:03:23.632 Read Recovery Levels: Not Supported 01:03:23.632 Endurance Groups: Not Supported 01:03:23.632 Predictable Latency Mode: Not Supported 01:03:23.632 Traffic Based Keep ALive: Not Supported 01:03:23.632 Namespace Granularity: Not Supported 01:03:23.632 SQ Associations: Not Supported 01:03:23.632 UUID List: Not Supported 01:03:23.632 Multi-Domain Subsystem: Not Supported 01:03:23.632 Fixed Capacity Management: Not Supported 01:03:23.632 Variable Capacity Management: Not Supported 01:03:23.632 Delete Endurance Group: Not Supported 01:03:23.632 Delete NVM Set: Not Supported 01:03:23.632 Extended LBA Formats Supported: Not Supported 01:03:23.632 Flexible Data Placement Supported: Not Supported 01:03:23.632 01:03:23.632 Controller Memory Buffer Support 01:03:23.632 ================================ 01:03:23.632 Supported: No 01:03:23.632 01:03:23.632 Persistent Memory Region Support 01:03:23.632 ================================ 01:03:23.632 Supported: No 01:03:23.632 01:03:23.632 Admin Command Set Attributes 01:03:23.632 ============================ 01:03:23.632 Security Send/Receive: Not Supported 01:03:23.632 Format NVM: Not Supported 01:03:23.632 Firmware Activate/Download: Not Supported 01:03:23.632 Namespace Management: Not Supported 01:03:23.632 Device Self-Test: Not Supported 01:03:23.632 Directives: Not Supported 01:03:23.632 NVMe-MI: Not Supported 01:03:23.632 Virtualization Management: Not Supported 01:03:23.632 Doorbell Buffer Config: Not Supported 01:03:23.632 Get LBA Status Capability: Not Supported 01:03:23.632 Command & Feature Lockdown Capability: Not Supported 01:03:23.632 Abort Command Limit: 4 01:03:23.632 Async Event Request Limit: 4 01:03:23.632 Number of Firmware Slots: N/A 01:03:23.632 Firmware 
Slot 1 Read-Only: N/A 01:03:23.632 Firmware Activation Without Reset: N/A 01:03:23.632 Multiple Update Detection Support: N/A 01:03:23.632 Firmware Update Granularity: No Information Provided 01:03:23.632 Per-Namespace SMART Log: No 01:03:23.632 Asymmetric Namespace Access Log Page: Not Supported 01:03:23.632 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 01:03:23.632 Command Effects Log Page: Supported 01:03:23.632 Get Log Page Extended Data: Supported 01:03:23.632 Telemetry Log Pages: Not Supported 01:03:23.632 Persistent Event Log Pages: Not Supported 01:03:23.632 Supported Log Pages Log Page: May Support 01:03:23.632 Commands Supported & Effects Log Page: Not Supported 01:03:23.632 Feature Identifiers & Effects Log Page:May Support 01:03:23.632 NVMe-MI Commands & Effects Log Page: May Support 01:03:23.632 Data Area 4 for Telemetry Log: Not Supported 01:03:23.632 Error Log Page Entries Supported: 128 01:03:23.632 Keep Alive: Supported 01:03:23.632 Keep Alive Granularity: 10000 ms 01:03:23.632 01:03:23.632 NVM Command Set Attributes 01:03:23.632 ========================== 01:03:23.632 Submission Queue Entry Size 01:03:23.632 Max: 64 01:03:23.632 Min: 64 01:03:23.632 Completion Queue Entry Size 01:03:23.632 Max: 16 01:03:23.632 Min: 16 01:03:23.632 Number of Namespaces: 32 01:03:23.632 Compare Command: Supported 01:03:23.632 Write Uncorrectable Command: Not Supported 01:03:23.632 Dataset Management Command: Supported 01:03:23.632 Write Zeroes Command: Supported 01:03:23.632 Set Features Save Field: Not Supported 01:03:23.632 Reservations: Supported 01:03:23.632 Timestamp: Not Supported 01:03:23.632 Copy: Supported 01:03:23.632 Volatile Write Cache: Present 01:03:23.632 Atomic Write Unit (Normal): 1 01:03:23.632 Atomic Write Unit (PFail): 1 01:03:23.632 Atomic Compare & Write Unit: 1 01:03:23.632 Fused Compare & Write: Supported 01:03:23.632 Scatter-Gather List 01:03:23.632 SGL Command Set: Supported 01:03:23.632 SGL Keyed: Supported 01:03:23.632 SGL Bit Bucket Descriptor: Not Supported 01:03:23.632 SGL Metadata Pointer: Not Supported 01:03:23.632 Oversized SGL: Not Supported 01:03:23.632 SGL Metadata Address: Not Supported 01:03:23.632 SGL Offset: Supported 01:03:23.632 Transport SGL Data Block: Not Supported 01:03:23.632 Replay Protected Memory Block: Not Supported 01:03:23.632 01:03:23.632 Firmware Slot Information 01:03:23.632 ========================= 01:03:23.632 Active slot: 1 01:03:23.632 Slot 1 Firmware Revision: 24.09 01:03:23.632 01:03:23.632 01:03:23.632 Commands Supported and Effects 01:03:23.632 ============================== 01:03:23.632 Admin Commands 01:03:23.632 -------------- 01:03:23.632 Get Log Page (02h): Supported 01:03:23.632 Identify (06h): Supported 01:03:23.632 Abort (08h): Supported 01:03:23.632 Set Features (09h): Supported 01:03:23.632 Get Features (0Ah): Supported 01:03:23.632 Asynchronous Event Request (0Ch): Supported 01:03:23.632 Keep Alive (18h): Supported 01:03:23.632 I/O Commands 01:03:23.632 ------------ 01:03:23.633 Flush (00h): Supported LBA-Change 01:03:23.633 Write (01h): Supported LBA-Change 01:03:23.633 Read (02h): Supported 01:03:23.633 Compare (05h): Supported 01:03:23.633 Write Zeroes (08h): Supported LBA-Change 01:03:23.633 Dataset Management (09h): Supported LBA-Change 01:03:23.633 Copy (19h): Supported LBA-Change 01:03:23.633 01:03:23.633 Error Log 01:03:23.633 ========= 01:03:23.633 01:03:23.633 Arbitration 01:03:23.633 =========== 01:03:23.633 Arbitration Burst: 1 01:03:23.633 01:03:23.633 Power Management 01:03:23.633 ================ 
01:03:23.633 Number of Power States: 1 01:03:23.633 Current Power State: Power State #0 01:03:23.633 Power State #0: 01:03:23.633 Max Power: 0.00 W 01:03:23.633 Non-Operational State: Operational 01:03:23.633 Entry Latency: Not Reported 01:03:23.633 Exit Latency: Not Reported 01:03:23.633 Relative Read Throughput: 0 01:03:23.633 Relative Read Latency: 0 01:03:23.633 Relative Write Throughput: 0 01:03:23.633 Relative Write Latency: 0 01:03:23.633 Idle Power: Not Reported 01:03:23.633 Active Power: Not Reported 01:03:23.633 Non-Operational Permissive Mode: Not Supported 01:03:23.633 01:03:23.633 Health Information 01:03:23.633 ================== 01:03:23.633 Critical Warnings: 01:03:23.633 Available Spare Space: OK 01:03:23.633 Temperature: OK 01:03:23.633 Device Reliability: OK 01:03:23.633 Read Only: No 01:03:23.633 Volatile Memory Backup: OK 01:03:23.633 Current Temperature: 0 Kelvin (-273 Celsius) 01:03:23.633 Temperature Threshold: 0 Kelvin (-273 Celsius) 01:03:23.633 Available Spare: 0% 01:03:23.633 Available Spare Threshold: 0% 01:03:23.633 Life Percentage Used:[2024-07-22 10:58:31.323210] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.323218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1585d00) 01:03:23.633 [2024-07-22 10:58:31.323228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.633 [2024-07-22 10:58:31.323254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ccf80, cid 7, qid 0 01:03:23.633 [2024-07-22 10:58:31.327295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.633 [2024-07-22 10:58:31.327322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.633 [2024-07-22 10:58:31.327328] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327333] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ccf80) on tqpair=0x1585d00 01:03:23.633 [2024-07-22 10:58:31.327390] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 01:03:23.633 [2024-07-22 10:58:31.327401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc500) on tqpair=0x1585d00 01:03:23.633 [2024-07-22 10:58:31.327410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:23.633 [2024-07-22 10:58:31.327417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc680) on tqpair=0x1585d00 01:03:23.633 [2024-07-22 10:58:31.327423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:23.633 [2024-07-22 10:58:31.327430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc800) on tqpair=0x1585d00 01:03:23.633 [2024-07-22 10:58:31.327436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:23.633 [2024-07-22 10:58:31.327443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.633 [2024-07-22 10:58:31.327449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:23.633 [2024-07-22 10:58:31.327462] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.633 [2024-07-22 10:58:31.327481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.633 [2024-07-22 10:58:31.327515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.633 [2024-07-22 10:58:31.327567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.633 [2024-07-22 10:58:31.327574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.633 [2024-07-22 10:58:31.327578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.633 [2024-07-22 10:58:31.327591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.633 [2024-07-22 10:58:31.327608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.633 [2024-07-22 10:58:31.327626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.633 [2024-07-22 10:58:31.327694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.633 [2024-07-22 10:58:31.327700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.633 [2024-07-22 10:58:31.327705] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.633 [2024-07-22 10:58:31.327716] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 01:03:23.633 [2024-07-22 10:58:31.327723] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 01:03:23.633 [2024-07-22 10:58:31.327735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.633 [2024-07-22 10:58:31.327752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.633 [2024-07-22 10:58:31.327768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.633 [2024-07-22 10:58:31.327815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.633 [2024-07-22 10:58:31.327822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.633 [2024-07-22 10:58:31.327827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.633 [2024-07-22 10:58:31.327846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.633 [2024-07-22 10:58:31.327866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.633 [2024-07-22 10:58:31.327882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.633 [2024-07-22 10:58:31.327929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.633 [2024-07-22 10:58:31.327937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.633 [2024-07-22 10:58:31.327941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.633 [2024-07-22 10:58:31.327957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.327968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.633 [2024-07-22 10:58:31.327978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.633 [2024-07-22 10:58:31.327995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.633 [2024-07-22 10:58:31.328036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.633 [2024-07-22 10:58:31.328046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.633 [2024-07-22 10:58:31.328053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.328060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.633 [2024-07-22 10:58:31.328071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.633 [2024-07-22 10:58:31.328076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.328089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.328107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.328148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.328157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.328163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.328181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328188] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.328204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.328224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.328275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.328286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.328293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.328309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.328330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.328352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.328395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.328404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.328410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.328425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.328443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.328462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.328504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.328513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.328519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.328533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 
[2024-07-22 10:58:31.328551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.328571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.328613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.328621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.328626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.328640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.328663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.328684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.328725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.328734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.328740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.328759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.328779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.328795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.328844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.328852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.328857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.328872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.328895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.328915] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.328959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.328966] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.328971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.328985] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.328996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.329006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.329028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.329068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.329077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.329082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.329097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.329119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.329137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.329182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.329189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.329194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.329211] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.329228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.329244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.329300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 
[2024-07-22 10:58:31.329310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.329316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.329331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.329350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.329373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.329416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.329425] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.634 [2024-07-22 10:58:31.329430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.634 [2024-07-22 10:58:31.329451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.634 [2024-07-22 10:58:31.329463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.634 [2024-07-22 10:58:31.329470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.634 [2024-07-22 10:58:31.329487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.634 [2024-07-22 10:58:31.329529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.634 [2024-07-22 10:58:31.329538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.329543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.329558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.329577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.329599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.329636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.329645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.329651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
01:03:23.635 [2024-07-22 10:58:31.329657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.329670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.329701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.329721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.329764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.329771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.329776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.329791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.329814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.329835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.329883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.329893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.329899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.329917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.329927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.329935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.329955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.329994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.330002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.330007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.330026] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330033] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.330048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.330065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.330108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.330117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.330123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330129] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.330140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.330163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.330182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.330221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.330228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.330234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.330253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.330282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.330304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.330349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.330356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.330360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.330375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330386] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.330395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.330415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.330455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.330462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.330467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.330482] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.330504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.330520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.330565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.330574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.330579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.330593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.330614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.330633] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.330672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.330679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.330684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.330702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.330725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.330744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.330788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.330795] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.330800] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.330814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.330833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.330855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.635 [2024-07-22 10:58:31.330892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.635 [2024-07-22 10:58:31.330902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.635 [2024-07-22 10:58:31.330907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.635 [2024-07-22 10:58:31.330927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.635 [2024-07-22 10:58:31.330940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.635 [2024-07-22 10:58:31.330949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.635 [2024-07-22 10:58:31.330965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.636 [2024-07-22 10:58:31.331006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.636 [2024-07-22 10:58:31.331016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.636 [2024-07-22 10:58:31.331020] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.636 [2024-07-22 10:58:31.331025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.636 [2024-07-22 10:58:31.331035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.636 [2024-07-22 10:58:31.331040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.636 [2024-07-22 10:58:31.331046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.636 [2024-07-22 10:58:31.331055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.636 [2024-07-22 10:58:31.331074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.636 [2024-07-22 
10:58:31.331120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.636 [2024-07-22 10:58:31.331129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.636 [2024-07-22 10:58:31.331136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.636 [2024-07-22 10:58:31.331142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.636 [2024-07-22 10:58:31.331152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.636 [2024-07-22 10:58:31.331157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.636 [2024-07-22 10:58:31.331162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.636 [2024-07-22 10:58:31.331171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.636 [2024-07-22 10:58:31.331192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.636 [2024-07-22 10:58:31.331237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.636 [2024-07-22 10:58:31.331246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.636 [2024-07-22 10:58:31.331253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.636 [2024-07-22 10:58:31.331259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.636 [2024-07-22 10:58:31.335305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:23.636 [2024-07-22 10:58:31.335317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:23.636 [2024-07-22 10:58:31.335322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1585d00) 01:03:23.636 [2024-07-22 10:58:31.335332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.636 [2024-07-22 10:58:31.335365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cc980, cid 3, qid 0 01:03:23.636 [2024-07-22 10:58:31.335417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:23.636 [2024-07-22 10:58:31.335424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:23.636 [2024-07-22 10:58:31.335429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:23.636 [2024-07-22 10:58:31.335434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cc980) on tqpair=0x1585d00 01:03:23.636 [2024-07-22 10:58:31.335443] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 01:03:23.636 0% 01:03:23.636 Data Units Read: 0 01:03:23.636 Data Units Written: 0 01:03:23.636 Host Read Commands: 0 01:03:23.636 Host Write Commands: 0 01:03:23.636 Controller Busy Time: 0 minutes 01:03:23.636 Power Cycles: 0 01:03:23.636 Power On Hours: 0 hours 01:03:23.636 Unsafe Shutdowns: 0 01:03:23.636 Unrecoverable Media Errors: 0 01:03:23.636 Lifetime Error Log Entries: 0 01:03:23.636 Warning Temperature Time: 0 minutes 01:03:23.636 Critical Temperature Time: 0 minutes 01:03:23.636 01:03:23.636 Number of Queues 01:03:23.636 ================ 01:03:23.636 Number of I/O Submission Queues: 127 01:03:23.636 Number of I/O Completion Queues: 127 01:03:23.636 01:03:23.636 Active Namespaces 01:03:23.636 
================= 01:03:23.636 Namespace ID:1 01:03:23.636 Error Recovery Timeout: Unlimited 01:03:23.636 Command Set Identifier: NVM (00h) 01:03:23.636 Deallocate: Supported 01:03:23.636 Deallocated/Unwritten Error: Not Supported 01:03:23.636 Deallocated Read Value: Unknown 01:03:23.636 Deallocate in Write Zeroes: Not Supported 01:03:23.636 Deallocated Guard Field: 0xFFFF 01:03:23.636 Flush: Supported 01:03:23.636 Reservation: Supported 01:03:23.636 Namespace Sharing Capabilities: Multiple Controllers 01:03:23.636 Size (in LBAs): 131072 (0GiB) 01:03:23.636 Capacity (in LBAs): 131072 (0GiB) 01:03:23.636 Utilization (in LBAs): 131072 (0GiB) 01:03:23.636 NGUID: ABCDEF0123456789ABCDEF0123456789 01:03:23.636 EUI64: ABCDEF0123456789 01:03:23.636 UUID: 95e02e9e-6420-44ea-98ad-1264ce540481 01:03:23.636 Thin Provisioning: Not Supported 01:03:23.636 Per-NS Atomic Units: Yes 01:03:23.636 Atomic Boundary Size (Normal): 0 01:03:23.636 Atomic Boundary Size (PFail): 0 01:03:23.636 Atomic Boundary Offset: 0 01:03:23.636 Maximum Single Source Range Length: 65535 01:03:23.636 Maximum Copy Length: 65535 01:03:23.636 Maximum Source Range Count: 1 01:03:23.636 NGUID/EUI64 Never Reused: No 01:03:23.636 Namespace Write Protected: No 01:03:23.636 Number of LBA Formats: 1 01:03:23.636 Current LBA Format: LBA Format #00 01:03:23.636 LBA Format #00: Data Size: 512 Metadata Size: 0 01:03:23.636 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:03:23.636 rmmod nvme_tcp 01:03:23.636 rmmod nvme_fabrics 01:03:23.636 rmmod nvme_keyring 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 105066 ']' 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 105066 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 105066 ']' 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 105066 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify 
-- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105066 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105066' 01:03:23.636 killing process with pid 105066 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 105066 01:03:23.636 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 105066 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:03:23.895 01:03:23.895 real 0m2.745s 01:03:23.895 user 0m7.212s 01:03:23.895 sys 0m0.835s 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:23.895 10:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:23.895 ************************************ 01:03:23.895 END TEST nvmf_identify 01:03:23.895 ************************************ 01:03:24.153 10:58:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:03:24.153 10:58:31 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:03:24.154 10:58:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:03:24.154 10:58:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:24.154 10:58:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:24.154 ************************************ 01:03:24.154 START TEST nvmf_perf 01:03:24.154 ************************************ 01:03:24.154 10:58:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:03:24.154 * Looking for test storage... 
01:03:24.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:24.154 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:03:24.413 Cannot find device "nvmf_tgt_br" 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:03:24.413 Cannot find device "nvmf_tgt_br2" 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:03:24.413 Cannot find device "nvmf_tgt_br" 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:03:24.413 Cannot find device "nvmf_tgt_br2" 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:24.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:24.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:24.413 
10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:24.413 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:03:24.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:24.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 01:03:24.674 01:03:24.674 --- 10.0.0.2 ping statistics --- 01:03:24.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:24.674 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:03:24.674 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:24.674 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 01:03:24.674 01:03:24.674 --- 10.0.0.3 ping statistics --- 01:03:24.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:24.674 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:24.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:03:24.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 01:03:24.674 01:03:24.674 --- 10.0.0.1 ping statistics --- 01:03:24.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:24.674 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=105293 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 105293 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 105293 ']' 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:24.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:24.674 10:58:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:03:24.989 [2024-07-22 10:58:32.643341] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:03:24.989 [2024-07-22 10:58:32.643417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:24.989 [2024-07-22 10:58:32.763553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:03:24.989 [2024-07-22 10:58:32.787315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:24.989 [2024-07-22 10:58:32.836648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:24.989 [2024-07-22 10:58:32.836938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:03:24.989 [2024-07-22 10:58:32.837077] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:24.989 [2024-07-22 10:58:32.837123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:24.989 [2024-07-22 10:58:32.837148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:24.989 [2024-07-22 10:58:32.837313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:24.989 [2024-07-22 10:58:32.837501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:03:24.989 [2024-07-22 10:58:32.838370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:24.989 [2024-07-22 10:58:32.838371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:03:25.931 10:58:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:25.931 10:58:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 01:03:25.931 10:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:03:25.931 10:58:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 01:03:25.931 10:58:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:03:25.931 10:58:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:25.931 10:58:33 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:03:25.931 10:58:33 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 01:03:26.190 10:58:33 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 01:03:26.190 10:58:33 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 01:03:26.448 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 01:03:26.448 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:03:26.448 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 01:03:26.448 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 01:03:26.448 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 01:03:26.448 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 01:03:26.448 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:03:26.706 [2024-07-22 10:58:34.533846] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:26.706 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:03:26.965 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:03:26.965 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:03:27.224 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:03:27.224 10:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:03:27.483 10:58:35 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:03:27.483 [2024-07-22 10:58:35.345991] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:03:27.483 10:58:35 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:03:27.741 10:58:35 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 01:03:27.741 10:58:35 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:03:27.741 10:58:35 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 01:03:27.741 10:58:35 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:03:29.115 Initializing NVMe Controllers 01:03:29.115 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:03:29.115 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:03:29.115 Initialization complete. Launching workers. 01:03:29.115 ======================================================== 01:03:29.115 Latency(us) 01:03:29.115 Device Information : IOPS MiB/s Average min max 01:03:29.116 PCIE (0000:00:10.0) NSID 1 from core 0: 20141.88 78.68 1589.06 318.14 8573.38 01:03:29.116 ======================================================== 01:03:29.116 Total : 20141.88 78.68 1589.06 318.14 8573.38 01:03:29.116 01:03:29.116 10:58:36 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:03:30.051 Initializing NVMe Controllers 01:03:30.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:03:30.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:03:30.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:03:30.051 Initialization complete. Launching workers. 01:03:30.051 ======================================================== 01:03:30.051 Latency(us) 01:03:30.051 Device Information : IOPS MiB/s Average min max 01:03:30.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4765.30 18.61 209.63 80.86 4261.89 01:03:30.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.67 0.48 8151.64 6016.51 12047.32 01:03:30.051 ======================================================== 01:03:30.051 Total : 4887.97 19.09 408.95 80.86 12047.32 01:03:30.051 01:03:30.309 10:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:03:31.685 Initializing NVMe Controllers 01:03:31.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:03:31.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:03:31.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:03:31.685 Initialization complete. Launching workers. 
01:03:31.685 ======================================================== 01:03:31.685 Latency(us) 01:03:31.685 Device Information : IOPS MiB/s Average min max 01:03:31.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11258.86 43.98 2842.48 424.28 9069.40 01:03:31.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2649.73 10.35 12175.32 6890.51 23273.85 01:03:31.685 ======================================================== 01:03:31.685 Total : 13908.60 54.33 4620.49 424.28 23273.85 01:03:31.685 01:03:31.685 10:58:39 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 01:03:31.685 10:58:39 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:03:34.221 Initializing NVMe Controllers 01:03:34.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:03:34.221 Controller IO queue size 128, less than required. 01:03:34.221 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:34.221 Controller IO queue size 128, less than required. 01:03:34.221 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:34.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:03:34.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:03:34.221 Initialization complete. Launching workers. 01:03:34.221 ======================================================== 01:03:34.221 Latency(us) 01:03:34.221 Device Information : IOPS MiB/s Average min max 01:03:34.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2271.51 567.88 56978.50 38056.82 107752.52 01:03:34.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 568.50 142.13 230390.55 69930.94 370174.70 01:03:34.221 ======================================================== 01:03:34.221 Total : 2840.01 710.00 91691.39 38056.82 370174.70 01:03:34.221 01:03:34.221 10:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 01:03:34.221 Initializing NVMe Controllers 01:03:34.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:03:34.221 Controller IO queue size 128, less than required. 01:03:34.221 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:34.221 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 01:03:34.221 Controller IO queue size 128, less than required. 01:03:34.221 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:34.221 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 01:03:34.221 WARNING: Some requested NVMe devices were skipped 01:03:34.221 No valid NVMe controllers or AIO or URING devices found 01:03:34.221 10:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 01:03:36.751 Initializing NVMe Controllers 01:03:36.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:03:36.751 Controller IO queue size 128, less than required. 01:03:36.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:36.751 Controller IO queue size 128, less than required. 01:03:36.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:36.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:03:36.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:03:36.751 Initialization complete. Launching workers. 01:03:36.751 01:03:36.751 ==================== 01:03:36.751 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 01:03:36.751 TCP transport: 01:03:36.751 polls: 12534 01:03:36.751 idle_polls: 5055 01:03:36.751 sock_completions: 7479 01:03:36.751 nvme_completions: 4577 01:03:36.751 submitted_requests: 6842 01:03:36.751 queued_requests: 1 01:03:36.751 01:03:36.751 ==================== 01:03:36.751 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 01:03:36.751 TCP transport: 01:03:36.751 polls: 12262 01:03:36.751 idle_polls: 7278 01:03:36.751 sock_completions: 4984 01:03:36.751 nvme_completions: 8715 01:03:36.751 submitted_requests: 13128 01:03:36.751 queued_requests: 1 01:03:36.751 ======================================================== 01:03:36.751 Latency(us) 01:03:36.751 Device Information : IOPS MiB/s Average min max 01:03:36.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1143.78 285.95 114940.08 73185.88 194632.27 01:03:36.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2178.09 544.52 58690.89 30991.24 110976.16 01:03:36.751 ======================================================== 01:03:36.751 Total : 3321.88 830.47 78058.56 30991.24 194632.27 01:03:36.751 01:03:36.751 10:58:44 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 01:03:36.751 10:58:44 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:37.007 10:58:44 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 01:03:37.007 10:58:44 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 01:03:37.007 10:58:44 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 01:03:37.264 10:58:45 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=368e21c7-ec55-41d6-943f-e7804435a1d1 01:03:37.264 10:58:45 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 368e21c7-ec55-41d6-943f-e7804435a1d1 01:03:37.264 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=368e21c7-ec55-41d6-943f-e7804435a1d1 01:03:37.264 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 01:03:37.264 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 
01:03:37.264 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 01:03:37.264 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:03:37.522 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:03:37.522 { 01:03:37.522 "base_bdev": "Nvme0n1", 01:03:37.522 "block_size": 4096, 01:03:37.522 "cluster_size": 4194304, 01:03:37.522 "free_clusters": 1278, 01:03:37.522 "name": "lvs_0", 01:03:37.522 "total_data_clusters": 1278, 01:03:37.522 "uuid": "368e21c7-ec55-41d6-943f-e7804435a1d1" 01:03:37.522 } 01:03:37.522 ]' 01:03:37.522 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="368e21c7-ec55-41d6-943f-e7804435a1d1") .free_clusters' 01:03:37.522 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 01:03:37.522 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="368e21c7-ec55-41d6-943f-e7804435a1d1") .cluster_size' 01:03:37.522 5112 01:03:37.522 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 01:03:37.522 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 01:03:37.522 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 01:03:37.522 10:58:45 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 01:03:37.522 10:58:45 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 368e21c7-ec55-41d6-943f-e7804435a1d1 lbd_0 5112 01:03:37.779 10:58:45 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=95f4ca00-6c4e-467c-bc0f-c81526f7bfa3 01:03:37.779 10:58:45 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 95f4ca00-6c4e-467c-bc0f-c81526f7bfa3 lvs_n_0 01:03:38.052 10:58:45 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=a39c9dfe-913e-4d00-9000-e79fe479071f 01:03:38.052 10:58:45 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb a39c9dfe-913e-4d00-9000-e79fe479071f 01:03:38.052 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=a39c9dfe-913e-4d00-9000-e79fe479071f 01:03:38.052 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 01:03:38.052 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 01:03:38.052 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 01:03:38.052 10:58:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:03:38.309 10:58:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:03:38.309 { 01:03:38.309 "base_bdev": "Nvme0n1", 01:03:38.309 "block_size": 4096, 01:03:38.309 "cluster_size": 4194304, 01:03:38.309 "free_clusters": 0, 01:03:38.309 "name": "lvs_0", 01:03:38.309 "total_data_clusters": 1278, 01:03:38.309 "uuid": "368e21c7-ec55-41d6-943f-e7804435a1d1" 01:03:38.309 }, 01:03:38.309 { 01:03:38.309 "base_bdev": "95f4ca00-6c4e-467c-bc0f-c81526f7bfa3", 01:03:38.309 "block_size": 4096, 01:03:38.309 "cluster_size": 4194304, 01:03:38.309 "free_clusters": 1276, 01:03:38.309 "name": "lvs_n_0", 01:03:38.309 "total_data_clusters": 1276, 01:03:38.309 "uuid": "a39c9dfe-913e-4d00-9000-e79fe479071f" 01:03:38.309 } 01:03:38.309 ]' 01:03:38.309 10:58:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="a39c9dfe-913e-4d00-9000-e79fe479071f") .free_clusters' 01:03:38.309 10:58:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 01:03:38.309 10:58:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a39c9dfe-913e-4d00-9000-e79fe479071f") .cluster_size' 01:03:38.309 5104 01:03:38.309 10:58:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 01:03:38.309 10:58:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 01:03:38.310 10:58:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 01:03:38.310 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 01:03:38.310 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a39c9dfe-913e-4d00-9000-e79fe479071f lbd_nest_0 5104 01:03:38.567 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=8f756140-5c0e-449d-ab0c-ec8d4410b9da 01:03:38.567 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:03:38.824 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 01:03:38.824 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 8f756140-5c0e-449d-ab0c-ec8d4410b9da 01:03:38.824 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:03:39.082 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 01:03:39.082 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 01:03:39.082 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 01:03:39.082 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:03:39.082 10:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:03:39.339 Initializing NVMe Controllers 01:03:39.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:03:39.339 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 01:03:39.339 WARNING: Some requested NVMe devices were skipped 01:03:39.339 No valid NVMe controllers or AIO or URING devices found 01:03:39.339 10:58:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:03:39.339 10:58:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:03:51.541 Initializing NVMe Controllers 01:03:51.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:03:51.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:03:51.541 Initialization complete. Launching workers. 
01:03:51.541 ======================================================== 01:03:51.541 Latency(us) 01:03:51.541 Device Information : IOPS MiB/s Average min max 01:03:51.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1056.27 132.03 946.16 277.31 7559.31 01:03:51.541 ======================================================== 01:03:51.541 Total : 1056.27 132.03 946.16 277.31 7559.31 01:03:51.541 01:03:51.541 10:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 01:03:51.541 10:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:03:51.541 10:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:03:51.541 Initializing NVMe Controllers 01:03:51.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:03:51.541 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 01:03:51.541 WARNING: Some requested NVMe devices were skipped 01:03:51.541 No valid NVMe controllers or AIO or URING devices found 01:03:51.541 10:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:03:51.541 10:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:04:01.512 Initializing NVMe Controllers 01:04:01.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:04:01.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:04:01.512 Initialization complete. Launching workers. 
01:04:01.512 ======================================================== 01:04:01.512 Latency(us) 01:04:01.512 Device Information : IOPS MiB/s Average min max 01:04:01.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 865.02 108.13 37047.72 7082.14 440418.96 01:04:01.512 ======================================================== 01:04:01.512 Total : 865.02 108.13 37047.72 7082.14 440418.96 01:04:01.512 01:04:01.512 10:59:08 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 01:04:01.512 10:59:08 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:04:01.512 10:59:08 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:04:01.512 Initializing NVMe Controllers 01:04:01.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:04:01.512 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 01:04:01.512 WARNING: Some requested NVMe devices were skipped 01:04:01.512 No valid NVMe controllers or AIO or URING devices found 01:04:01.512 10:59:08 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:04:01.512 10:59:08 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:04:11.522 Initializing NVMe Controllers 01:04:11.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:04:11.522 Controller IO queue size 128, less than required. 01:04:11.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:04:11.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:04:11.522 Initialization complete. Launching workers. 
01:04:11.522 ======================================================== 01:04:11.522 Latency(us) 01:04:11.522 Device Information : IOPS MiB/s Average min max 01:04:11.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4943.23 617.90 25902.56 7343.43 64976.15 01:04:11.522 ======================================================== 01:04:11.522 Total : 4943.23 617.90 25902.56 7343.43 64976.15 01:04:11.522 01:04:11.522 10:59:18 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:04:11.522 10:59:18 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8f756140-5c0e-449d-ab0c-ec8d4410b9da 01:04:11.522 10:59:19 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 01:04:11.781 10:59:19 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 95f4ca00-6c4e-467c-bc0f-c81526f7bfa3 01:04:11.781 10:59:19 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 01:04:12.039 10:59:19 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 01:04:12.039 10:59:19 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 01:04:12.039 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 01:04:12.039 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 01:04:12.039 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:04:12.039 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 01:04:12.039 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 01:04:12.039 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:04:12.039 rmmod nvme_tcp 01:04:12.039 rmmod nvme_fabrics 01:04:12.039 rmmod nvme_keyring 01:04:12.298 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:04:12.298 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 01:04:12.298 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 01:04:12.298 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 105293 ']' 01:04:12.298 10:59:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 105293 01:04:12.298 10:59:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 105293 ']' 01:04:12.298 10:59:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 105293 01:04:12.298 10:59:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 01:04:12.298 10:59:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:04:12.298 10:59:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105293 01:04:12.298 killing process with pid 105293 01:04:12.298 10:59:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:04:12.298 10:59:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:04:12.298 10:59:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105293' 01:04:12.298 10:59:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 105293 01:04:12.298 10:59:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 105293 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:04:13.701 ************************************ 01:04:13.701 END TEST nvmf_perf 01:04:13.701 ************************************ 01:04:13.701 01:04:13.701 real 0m49.613s 01:04:13.701 user 3m2.976s 01:04:13.701 sys 0m12.578s 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:13.701 10:59:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:04:13.701 10:59:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:04:13.701 10:59:21 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:04:13.701 10:59:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:04:13.701 10:59:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:13.701 10:59:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:04:13.701 ************************************ 01:04:13.701 START TEST nvmf_fio_host 01:04:13.701 ************************************ 01:04:13.701 10:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:04:13.960 * Looking for test storage... 
01:04:13.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:13.960 10:59:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:04:13.961 Cannot find device "nvmf_tgt_br" 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:04:13.961 Cannot find device "nvmf_tgt_br2" 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:04:13.961 Cannot find device "nvmf_tgt_br" 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:04:13.961 Cannot find device "nvmf_tgt_br2" 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:13.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:13.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:04:13.961 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:04:14.219 10:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:04:14.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:04:14.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 01:04:14.219 01:04:14.219 --- 10.0.0.2 ping statistics --- 01:04:14.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:14.219 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:04:14.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:04:14.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 01:04:14.219 01:04:14.219 --- 10.0.0.3 ping statistics --- 01:04:14.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:14.219 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:04:14.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:04:14.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 01:04:14.219 01:04:14.219 --- 10.0.0.1 ping statistics --- 01:04:14.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:14.219 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=106238 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 106238 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 106238 ']' 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 01:04:14.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:04:14.219 10:59:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:04:14.476 [2024-07-22 10:59:22.157912] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:04:14.476 [2024-07-22 10:59:22.157993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:04:14.476 [2024-07-22 10:59:22.279457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
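For readability, the virtual topology that nvmf_veth_init assembled in the trace above can be summarized by the following condensed sketch. It reuses the same interface names and 10.0.0.0/24 addressing shown in the log, omits the second target interface (nvmf_tgt_if2 / 10.0.0.3, which is set up the same way), and is a re-statement for illustration, not the exact helper from nvmf/common.sh:

    ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge ties both halves together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target sanity check

With that topology verified by the three pings, the trace above loads nvme-tcp and launches nvmf_tgt inside the namespace so that the target listens on 10.0.0.2 while the initiator-side fio runs from the default namespace.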
01:04:14.476 [2024-07-22 10:59:22.288691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:04:14.476 [2024-07-22 10:59:22.335900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:04:14.476 [2024-07-22 10:59:22.335952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:04:14.476 [2024-07-22 10:59:22.335961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:04:14.476 [2024-07-22 10:59:22.335969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:04:14.476 [2024-07-22 10:59:22.335976] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:04:14.476 [2024-07-22 10:59:22.336180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:04:14.476 [2024-07-22 10:59:22.336392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:04:14.476 [2024-07-22 10:59:22.337124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:14.476 [2024-07-22 10:59:22.337125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:04:15.405 10:59:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:04:15.406 10:59:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 01:04:15.406 10:59:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:04:15.406 [2024-07-22 10:59:23.256234] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:15.406 10:59:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 01:04:15.406 10:59:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 01:04:15.406 10:59:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:04:15.662 10:59:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 01:04:15.662 Malloc1 01:04:15.663 10:59:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:04:15.920 10:59:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:04:16.178 10:59:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:04:16.178 [2024-07-22 10:59:24.094898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:04:16.436 10:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:04:16.694 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:04:16.694 fio-3.35 01:04:16.694 Starting 1 thread 01:04:19.241 01:04:19.241 test: (groupid=0, jobs=1): err= 0: pid=106363: Mon Jul 22 10:59:26 2024 01:04:19.241 read: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(93.9MiB/2005msec) 01:04:19.241 slat (nsec): min=1541, max=172779, avg=1665.69, stdev=1424.73 01:04:19.241 clat (usec): min=1982, max=9919, avg=5593.33, stdev=369.24 01:04:19.241 lat (usec): min=2005, max=9921, avg=5595.00, stdev=369.15 01:04:19.241 clat percentiles (usec): 01:04:19.241 | 1.00th=[ 4817], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5342], 01:04:19.241 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5669], 01:04:19.241 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 5997], 95.00th=[ 6128], 01:04:19.241 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7701], 99.95th=[ 9241], 01:04:19.241 | 99.99th=[ 9765] 01:04:19.241 bw ( KiB/s): min=47216, max=48552, per=100.00%, avg=47952.00, stdev=619.99, samples=4 01:04:19.241 iops : min=11804, max=12138, avg=11988.00, stdev=155.00, samples=4 
01:04:19.241 write: IOPS=11.9k, BW=46.6MiB/s (48.9MB/s)(93.5MiB/2005msec); 0 zone resets 01:04:19.241 slat (nsec): min=1599, max=113355, avg=1722.88, stdev=890.57 01:04:19.241 clat (usec): min=1083, max=9411, avg=5073.79, stdev=339.88 01:04:19.241 lat (usec): min=1090, max=9413, avg=5075.51, stdev=339.83 01:04:19.241 clat percentiles (usec): 01:04:19.241 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4686], 20.00th=[ 4817], 01:04:19.241 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 01:04:19.241 | 70.00th=[ 5211], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5538], 01:04:19.241 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 7963], 99.95th=[ 9110], 01:04:19.241 | 99.99th=[ 9372] 01:04:19.241 bw ( KiB/s): min=47344, max=48064, per=99.95%, avg=47730.00, stdev=297.64, samples=4 01:04:19.241 iops : min=11836, max=12016, avg=11932.50, stdev=74.41, samples=4 01:04:19.241 lat (msec) : 2=0.03%, 4=0.20%, 10=99.78% 01:04:19.241 cpu : usr=66.17%, sys=25.50%, ctx=33, majf=0, minf=7 01:04:19.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:04:19.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:19.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:19.241 issued rwts: total=24032,23936,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:19.241 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:19.241 01:04:19.241 Run status group 0 (all jobs): 01:04:19.241 READ: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=93.9MiB (98.4MB), run=2005-2005msec 01:04:19.241 WRITE: bw=46.6MiB/s (48.9MB/s), 46.6MiB/s-46.6MiB/s (48.9MB/s-48.9MB/s), io=93.5MiB (98.0MB), run=2005-2005msec 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:04:19.241 10:59:26 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:04:19.241 10:59:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 01:04:19.241 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 01:04:19.241 fio-3.35 01:04:19.241 Starting 1 thread 01:04:21.770 01:04:21.770 test: (groupid=0, jobs=1): err= 0: pid=106410: Mon Jul 22 10:59:29 2024 01:04:21.770 read: IOPS=10.7k, BW=168MiB/s (176MB/s)(336MiB/2004msec) 01:04:21.770 slat (nsec): min=2508, max=90146, avg=2809.41, stdev=1454.24 01:04:21.771 clat (usec): min=1902, max=21007, avg=7032.13, stdev=1865.70 01:04:21.771 lat (usec): min=1904, max=21017, avg=7034.94, stdev=1865.98 01:04:21.771 clat percentiles (usec): 01:04:21.771 | 1.00th=[ 3687], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5538], 01:04:21.771 | 30.00th=[ 5997], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7504], 01:04:21.771 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 9503], 01:04:21.771 | 99.00th=[12649], 99.50th=[17695], 99.90th=[20579], 99.95th=[20841], 01:04:21.771 | 99.99th=[20841] 01:04:21.771 bw ( KiB/s): min=72448, max=98112, per=49.79%, avg=85600.00, stdev=12045.21, samples=4 01:04:21.771 iops : min= 4528, max= 6132, avg=5350.00, stdev=752.83, samples=4 01:04:21.771 write: IOPS=6435, BW=101MiB/s (105MB/s)(175MiB/1742msec); 0 zone resets 01:04:21.771 slat (usec): min=28, max=266, avg=30.25, stdev= 5.21 01:04:21.771 clat (usec): min=3271, max=18768, avg=8487.96, stdev=1453.32 01:04:21.771 lat (usec): min=3302, max=18834, avg=8518.21, stdev=1454.15 01:04:21.771 clat percentiles (usec): 01:04:21.771 | 1.00th=[ 5932], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7308], 01:04:21.771 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8586], 01:04:21.771 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11338], 01:04:21.771 | 99.00th=[12518], 99.50th=[13042], 99.90th=[14615], 99.95th=[15664], 01:04:21.771 | 99.99th=[16188] 01:04:21.771 bw ( KiB/s): min=76384, max=101408, per=86.80%, avg=89376.00, stdev=11891.92, samples=4 01:04:21.771 iops : min= 4774, max= 6338, avg=5586.00, stdev=743.25, samples=4 01:04:21.771 lat (msec) : 2=0.01%, 4=1.23%, 10=91.90%, 20=6.65%, 50=0.22% 01:04:21.771 cpu : usr=72.89%, sys=17.47%, ctx=509, majf=0, minf=7 01:04:21.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 01:04:21.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:21.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:21.771 issued rwts: total=21534,11211,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:21.771 latency : target=0, 
window=0, percentile=100.00%, depth=128 01:04:21.771 01:04:21.771 Run status group 0 (all jobs): 01:04:21.771 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=336MiB (353MB), run=2004-2004msec 01:04:21.771 WRITE: bw=101MiB/s (105MB/s), 101MiB/s-101MiB/s (105MB/s-105MB/s), io=175MiB (184MB), run=1742-1742msec 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:04:21.771 10:59:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 01:04:22.029 Nvme0n1 01:04:22.029 10:59:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 01:04:22.287 10:59:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=a30b8fd1-23a9-4806-99ff-32205c63f7f7 01:04:22.287 10:59:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb a30b8fd1-23a9-4806-99ff-32205c63f7f7 01:04:22.287 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=a30b8fd1-23a9-4806-99ff-32205c63f7f7 01:04:22.287 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 01:04:22.287 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 01:04:22.287 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 01:04:22.287 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:04:22.545 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:04:22.545 { 01:04:22.545 "base_bdev": "Nvme0n1", 01:04:22.545 "block_size": 4096, 01:04:22.545 "cluster_size": 1073741824, 01:04:22.545 "free_clusters": 4, 01:04:22.545 "name": "lvs_0", 01:04:22.545 "total_data_clusters": 4, 01:04:22.545 "uuid": "a30b8fd1-23a9-4806-99ff-32205c63f7f7" 01:04:22.545 } 01:04:22.545 ]' 01:04:22.545 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a30b8fd1-23a9-4806-99ff-32205c63f7f7") .free_clusters' 01:04:22.545 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 01:04:22.545 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="a30b8fd1-23a9-4806-99ff-32205c63f7f7") .cluster_size' 01:04:22.545 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 01:04:22.545 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 01:04:22.545 4096 01:04:22.545 10:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 01:04:22.545 10:59:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 01:04:22.805 2f96f939-d283-4215-87ce-bdd21bf95fab 01:04:22.805 10:59:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 01:04:22.805 10:59:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 01:04:23.063 10:59:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:04:23.321 10:59:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:04:23.321 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:04:23.321 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:04:23.322 10:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:04:23.620 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:04:23.620 fio-3.35 01:04:23.620 Starting 1 thread 01:04:26.175 01:04:26.175 test: (groupid=0, jobs=1): err= 0: pid=106557: Mon Jul 22 10:59:33 2024 01:04:26.175 read: IOPS=7719, BW=30.2MiB/s (31.6MB/s)(60.6MiB/2008msec) 01:04:26.175 slat (nsec): min=1559, max=284344, avg=1987.68, stdev=3025.22 01:04:26.176 clat (usec): min=3118, max=15343, avg=8716.43, stdev=819.92 01:04:26.176 lat (usec): min=3124, max=15344, avg=8718.42, stdev=819.78 01:04:26.176 clat percentiles (usec): 01:04:26.176 | 1.00th=[ 6980], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8094], 01:04:26.176 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 01:04:26.176 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[10028], 01:04:26.176 | 99.00th=[10945], 99.50th=[11338], 99.90th=[13042], 99.95th=[14484], 01:04:26.176 | 99.99th=[14877] 01:04:26.176 bw ( KiB/s): min=29736, max=31472, per=99.99%, avg=30874.00, stdev=791.30, samples=4 01:04:26.176 iops : min= 7434, max= 7868, avg=7718.50, stdev=197.82, samples=4 01:04:26.176 write: IOPS=7709, BW=30.1MiB/s (31.6MB/s)(60.5MiB/2008msec); 0 zone resets 01:04:26.176 slat (nsec): min=1605, max=174609, avg=2051.32, stdev=1919.36 01:04:26.176 clat (usec): min=1670, max=14655, avg=7828.52, stdev=762.57 01:04:26.176 lat (usec): min=1679, max=14657, avg=7830.57, stdev=762.47 01:04:26.176 clat percentiles (usec): 01:04:26.176 | 1.00th=[ 6063], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7308], 01:04:26.176 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7963], 01:04:26.176 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 8979], 01:04:26.176 | 99.00th=[ 9765], 99.50th=[10290], 99.90th=[13435], 99.95th=[13698], 01:04:26.176 | 99.99th=[14484] 01:04:26.176 bw ( KiB/s): min=30336, max=31336, per=99.99%, avg=30834.00, stdev=409.56, samples=4 01:04:26.176 iops : min= 7584, max= 7834, avg=7708.50, stdev=102.39, samples=4 01:04:26.176 lat (msec) : 2=0.01%, 4=0.11%, 10=96.77%, 20=3.11% 01:04:26.176 cpu : usr=65.87%, sys=27.80%, ctx=7, majf=0, minf=7 01:04:26.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 01:04:26.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:26.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:26.176 issued rwts: total=15501,15480,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:26.176 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:26.176 01:04:26.176 Run status group 0 (all jobs): 01:04:26.176 READ: bw=30.2MiB/s (31.6MB/s), 30.2MiB/s-30.2MiB/s (31.6MB/s-31.6MB/s), io=60.6MiB (63.5MB), run=2008-2008msec 01:04:26.176 WRITE: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.5MiB (63.4MB), run=2008-2008msec 01:04:26.176 10:59:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:04:26.176 10:59:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 01:04:26.176 10:59:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=4877651b-d82d-4af2-a926-e7a36c5470da 01:04:26.176 10:59:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 4877651b-d82d-4af2-a926-e7a36c5470da 01:04:26.176 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=4877651b-d82d-4af2-a926-e7a36c5470da 01:04:26.176 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 01:04:26.176 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 01:04:26.176 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 01:04:26.176 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:04:26.435 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:04:26.435 { 01:04:26.435 "base_bdev": "Nvme0n1", 01:04:26.435 "block_size": 4096, 01:04:26.435 "cluster_size": 1073741824, 01:04:26.435 "free_clusters": 0, 01:04:26.435 "name": "lvs_0", 01:04:26.435 "total_data_clusters": 4, 01:04:26.435 "uuid": "a30b8fd1-23a9-4806-99ff-32205c63f7f7" 01:04:26.435 }, 01:04:26.435 { 01:04:26.435 "base_bdev": "2f96f939-d283-4215-87ce-bdd21bf95fab", 01:04:26.435 "block_size": 4096, 01:04:26.435 "cluster_size": 4194304, 01:04:26.435 "free_clusters": 1022, 01:04:26.435 "name": "lvs_n_0", 01:04:26.435 "total_data_clusters": 1022, 01:04:26.435 "uuid": "4877651b-d82d-4af2-a926-e7a36c5470da" 01:04:26.435 } 01:04:26.435 ]' 01:04:26.435 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4877651b-d82d-4af2-a926-e7a36c5470da") .free_clusters' 01:04:26.435 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 01:04:26.435 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="4877651b-d82d-4af2-a926-e7a36c5470da") .cluster_size' 01:04:26.435 4088 01:04:26.435 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 01:04:26.435 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 01:04:26.435 10:59:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 01:04:26.435 10:59:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 01:04:26.693 786f9844-f77e-4e7f-8638-62b7f74d4a72 01:04:26.693 10:59:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 01:04:26.951 10:59:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 01:04:26.951 10:59:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:04:27.209 10:59:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:04:27.467 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:04:27.467 fio-3.35 01:04:27.467 Starting 1 thread 01:04:29.996 01:04:29.996 test: (groupid=0, jobs=1): err= 0: pid=106673: Mon Jul 22 10:59:37 2024 01:04:29.996 read: IOPS=6951, BW=27.2MiB/s (28.5MB/s)(54.5MiB/2008msec) 01:04:29.996 slat (nsec): min=1565, max=395958, avg=1975.32, stdev=4344.95 01:04:29.996 clat (usec): min=3740, max=15939, avg=9656.96, stdev=811.30 01:04:29.996 lat (usec): min=3752, max=15941, avg=9658.94, stdev=810.98 01:04:29.996 clat percentiles (usec): 01:04:29.996 | 1.00th=[ 7898], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 01:04:29.996 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 01:04:29.996 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 01:04:29.996 | 99.00th=[11731], 99.50th=[12125], 99.90th=[14746], 99.95th=[15795], 01:04:29.996 | 99.99th=[15926] 01:04:29.996 bw ( KiB/s): min=27488, max=28264, per=99.96%, avg=27794.00, stdev=343.06, 
samples=4 01:04:29.996 iops : min= 6872, max= 7066, avg=6948.50, stdev=85.77, samples=4 01:04:29.996 write: IOPS=6959, BW=27.2MiB/s (28.5MB/s)(54.6MiB/2008msec); 0 zone resets 01:04:29.996 slat (nsec): min=1614, max=282985, avg=2076.35, stdev=2870.78 01:04:29.996 clat (usec): min=2812, max=15980, avg=8684.76, stdev=753.25 01:04:29.996 lat (usec): min=2827, max=15981, avg=8686.84, stdev=753.04 01:04:29.996 clat percentiles (usec): 01:04:29.996 | 1.00th=[ 7111], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8094], 01:04:29.996 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 01:04:29.996 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9765], 01:04:29.996 | 99.00th=[10421], 99.50th=[10683], 99.90th=[14615], 99.95th=[14877], 01:04:29.996 | 99.99th=[15926] 01:04:29.996 bw ( KiB/s): min=27520, max=28424, per=99.93%, avg=27818.00, stdev=409.87, samples=4 01:04:29.996 iops : min= 6880, max= 7106, avg=6954.50, stdev=102.47, samples=4 01:04:29.996 lat (msec) : 4=0.06%, 10=83.26%, 20=16.68% 01:04:29.996 cpu : usr=68.26%, sys=26.21%, ctx=22, majf=0, minf=7 01:04:29.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 01:04:29.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:29.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:29.996 issued rwts: total=13958,13975,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:29.997 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:29.997 01:04:29.997 Run status group 0 (all jobs): 01:04:29.997 READ: bw=27.2MiB/s (28.5MB/s), 27.2MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=54.5MiB (57.2MB), run=2008-2008msec 01:04:29.997 WRITE: bw=27.2MiB/s (28.5MB/s), 27.2MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=54.6MiB (57.2MB), run=2008-2008msec 01:04:29.997 10:59:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 01:04:29.997 10:59:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 01:04:29.997 10:59:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 01:04:30.255 10:59:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 01:04:30.513 10:59:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 01:04:30.513 10:59:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 01:04:30.772 10:59:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-tcp 01:04:31.752 rmmod nvme_tcp 01:04:31.752 rmmod nvme_fabrics 01:04:31.752 rmmod nvme_keyring 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 106238 ']' 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 106238 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 106238 ']' 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 106238 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106238 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:04:31.752 killing process with pid 106238 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106238' 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 106238 01:04:31.752 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 106238 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:04:32.010 01:04:32.010 real 0m18.269s 01:04:32.010 user 1m18.374s 01:04:32.010 sys 0m4.934s 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:32.010 10:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:04:32.010 ************************************ 01:04:32.010 END TEST nvmf_fio_host 01:04:32.010 ************************************ 01:04:32.010 10:59:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:04:32.010 10:59:39 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:04:32.010 10:59:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:04:32.010 10:59:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:32.010 10:59:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:04:32.010 ************************************ 01:04:32.010 START TEST nvmf_failover 01:04:32.010 
************************************ 01:04:32.010 10:59:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:04:32.269 * Looking for test storage... 01:04:32.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 01:04:32.269 
10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:04:32.269 Cannot find device "nvmf_tgt_br" 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:04:32.269 Cannot find device "nvmf_tgt_br2" 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:04:32.269 Cannot find device "nvmf_tgt_br" 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 01:04:32.269 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:04:32.270 Cannot find device "nvmf_tgt_br2" 01:04:32.270 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 01:04:32.270 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:32.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:32.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:04:32.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:04:32.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 01:04:32.541 01:04:32.541 --- 10.0.0.2 ping statistics --- 01:04:32.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:32.541 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:04:32.541 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:04:32.541 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 01:04:32.541 01:04:32.541 --- 10.0.0.3 ping statistics --- 01:04:32.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:32.541 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:04:32.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:04:32.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 01:04:32.541 01:04:32.541 --- 10.0.0.1 ping statistics --- 01:04:32.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:32.541 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:04:32.541 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=106946 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 106946 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 106946 ']' 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 01:04:32.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
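At this point nvmf_tgt has been launched inside the namespace; once it is up, the test provisions it over JSON-RPC. A condensed sketch of that sequence, assembled from the rpc.py calls visible below (paths are shortened relative to the spdk repo root, and the loop and $rpc variable are just shorthand for the three logged add_listener calls):

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                  # TCP transport, 8192-byte in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0                                     # 64 MiB, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                                # three portals on 10.0.0.2
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done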
01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 01:04:32.804 10:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:04:32.804 [2024-07-22 10:59:40.553806] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:04:32.804 [2024-07-22 10:59:40.553887] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:04:32.804 [2024-07-22 10:59:40.673302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:04:32.804 [2024-07-22 10:59:40.697863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:04:33.062 [2024-07-22 10:59:40.742983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:04:33.062 [2024-07-22 10:59:40.743034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:04:33.062 [2024-07-22 10:59:40.743043] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:04:33.062 [2024-07-22 10:59:40.743051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:04:33.062 [2024-07-22 10:59:40.743058] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:04:33.062 [2024-07-22 10:59:40.743311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:04:33.062 [2024-07-22 10:59:40.744160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:04:33.062 [2024-07-22 10:59:40.744161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:04:33.628 10:59:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:04:33.628 10:59:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 01:04:33.628 10:59:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:04:33.628 10:59:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 01:04:33.628 10:59:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:04:33.628 10:59:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:04:33.628 10:59:41 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:04:33.887 [2024-07-22 10:59:41.622366] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:33.887 10:59:41 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:04:34.145 Malloc0 01:04:34.145 10:59:41 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:04:34.145 10:59:42 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:04:34.403 10:59:42 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:04:34.676 [2024-07-22 
10:59:42.416804] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:04:34.676 10:59:42 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:04:34.676 [2024-07-22 10:59:42.604630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 01:04:34.935 [2024-07-22 10:59:42.796503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=107057 01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 107057 /var/tmp/bdevperf.sock 01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 107057 ']' 01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:04:34.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
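With bdevperf started against /var/tmp/bdevperf.sock, the rest of the log exercises failover by attaching the same subsystem through two portals and then removing and re-adding listeners while I/O runs. A sketch of that sequence, assembled from the host/failover.sh steps visible in the trace below; the $rpc_bperf variable, shortened paths, trailing wait, and comments are editorial shorthand, not the script source.

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  rpc_bperf='scripts/rpc.py -s /var/tmp/bdevperf.sock'
  $rpc_bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # force failover to 4421
  sleep 3
  $rpc_bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
  sleep 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
  wait                                                                                                  # perform_tests finishes the 15 s run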
01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 01:04:34.935 10:59:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:04:35.868 10:59:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:04:35.868 10:59:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 01:04:35.868 10:59:43 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:04:36.126 NVMe0n1 01:04:36.126 10:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:04:36.384 01:04:36.384 10:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=107104 01:04:36.384 10:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 01:04:36.384 10:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:04:37.757 10:59:45 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:04:37.757 [2024-07-22 10:59:45.512161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.757 [2024-07-22 10:59:45.512322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is 
same with the state(5) to be set 01:04:37.758 [2024-07-22 10:59:45.512863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.758 [2024-07-22 10:59:45.512871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.758 [2024-07-22 10:59:45.512879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.758 [2024-07-22 10:59:45.512887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.758 [2024-07-22 10:59:45.512895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.758 [2024-07-22 10:59:45.512902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.758 [2024-07-22 10:59:45.512911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.758 [2024-07-22 10:59:45.512919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.758 [2024-07-22 10:59:45.512926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245fc40 is same with the state(5) to be set 01:04:37.758 10:59:45 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 01:04:41.090 10:59:48 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:04:41.090 01:04:41.090 10:59:48 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:04:41.090 [2024-07-22 10:59:49.018767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.090 [2024-07-22 10:59:49.018820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.090 [2024-07-22 10:59:49.018831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.090 [2024-07-22 10:59:49.018840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.090 [2024-07-22 10:59:49.018848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.090 [2024-07-22 10:59:49.018858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.090 [2024-07-22 10:59:49.018868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.090 [2024-07-22 10:59:49.018876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.090 [2024-07-22 10:59:49.018884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.090 [2024-07-22 10:59:49.018892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the 
state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same
with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.091 [2024-07-22 10:59:49.019543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460ff0 is same with the state(5) to be set 01:04:41.349 10:59:49 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 01:04:44.630 10:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:04:44.630 [2024-07-22 10:59:52.238470] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:04:44.630 10:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 01:04:45.566 10:59:53 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 01:04:45.566 [2024-07-22 10:59:53.449447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.449496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.449506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.449514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.449522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 
10:59:53.449530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the
state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450065] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 [2024-07-22 10:59:53.450172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616d0 is same with the state(5) to be set 01:04:45.566 10:59:53 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 107104 01:04:52.129 0 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 107057 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 107057 ']' 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 107057 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107057 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 01:04:52.129 killing process with pid 107057 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107057' 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 107057 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 107057 01:04:52.129 10:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:04:52.129 [2024-07-22 10:59:42.872476] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:04:52.129 [2024-07-22 10:59:42.872569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107057 ] 01:04:52.129 [2024-07-22 10:59:42.993901] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:04:52.129 [2024-07-22 10:59:43.002930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:52.129 [2024-07-22 10:59:43.050596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:52.129 Running I/O for 15 seconds... 01:04:52.129 [2024-07-22 10:59:45.513142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.513974] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.513987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 
nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105248 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.129 [2024-07-22 10:59:45.514760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.129 [2024-07-22 10:59:45.514772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.514786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.514798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.514813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:04:52.130 [2024-07-22 10:59:45.514825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.514839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.514851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.514865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.514877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.514891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.514903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.514917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.514933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.514948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.514961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.514975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.514988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.515014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.515041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.515067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 
10:59:45.515093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.515119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.515145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.515170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.515196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.515966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.515981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.515994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:04:52.130 [2024-07-22 10:59:45.516200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.516397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.516423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.516449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.516475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 
10:59:45.516489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.516501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.516527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.516553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.130 [2024-07-22 10:59:45.516578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.130 [2024-07-22 10:59:45.516738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.130 [2024-07-22 10:59:45.516752] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1e5e0 is same with the state(5) to be set 01:04:52.130 [2024-07-22 10:59:45.516769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.130 [2024-07-22 10:59:45.516779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.130 [2024-07-22 10:59:45.516791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105608 len:8 PRP1 0x0 PRP2 0x0 01:04:52.130 [2024-07-22 10:59:45.516804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:45.516860] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa1e5e0 was disconnected and freed. reset controller. 01:04:52.131 [2024-07-22 10:59:45.516881] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 01:04:52.131 [2024-07-22 10:59:45.516932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:04:52.131 [2024-07-22 10:59:45.516947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:45.516962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:04:52.131 [2024-07-22 10:59:45.516974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:45.516987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:04:52.131 [2024-07-22 10:59:45.516999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:45.517012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:04:52.131 [2024-07-22 10:59:45.517025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:45.517037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:04:52.131 [2024-07-22 10:59:45.517084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f8240 (9): Bad file descriptor 01:04:52.131 [2024-07-22 10:59:45.519876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:04:52.131 [2024-07-22 10:59:45.547709] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
01:04:52.131 [2024-07-22 10:59:49.019667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.019733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.019757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.019772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.019787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.019800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.019815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.019828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.019843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.019855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.019870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.019883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.019897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.019910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.019925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.019938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.019952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.019965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.019979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.019992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020007] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020580] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.131 [2024-07-22 10:59:49.020904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.020932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.020960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.020975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.020988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.021015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.021043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.021071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.021100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.021127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:04:52.131 [2024-07-22 10:59:49.021155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.021188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.021215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.021243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.021278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.131 [2024-07-22 10:59:49.021293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.131 [2024-07-22 10:59:49.021306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021446] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021741] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.021852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.021880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.021912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.021939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.021968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.021982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.021995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.022023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.022052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 
[2024-07-22 10:59:49.022328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:35 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.022980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.022993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.023021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.023048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.023075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.023103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.023131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.132 [2024-07-22 10:59:49.023158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48912 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.023186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.023213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.023241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.023278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.023306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.132 [2024-07-22 10:59:49.023340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc52a0 is same with the state(5) to be set 01:04:52.132 [2024-07-22 10:59:49.023371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.132 [2024-07-22 10:59:49.023381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.132 [2024-07-22 10:59:49.023391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48960 len:8 PRP1 0x0 PRP2 0x0 01:04:52.132 [2024-07-22 10:59:49.023404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.132 [2024-07-22 10:59:49.023463] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbc52a0 was disconnected and freed. reset controller. 
01:04:52.132 [2024-07-22 10:59:49.023479] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
01:04:52.132 [2024-07-22 10:59:49.023532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
01:04:52.132 [2024-07-22 10:59:49.023548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:04:52.132 [2024-07-22 10:59:49.023563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
01:04:52.132 [2024-07-22 10:59:49.023576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:04:52.133 [2024-07-22 10:59:49.023589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
01:04:52.133 [2024-07-22 10:59:49.023602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:04:52.133 [2024-07-22 10:59:49.023616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
01:04:52.133 [2024-07-22 10:59:49.023629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:04:52.133 [2024-07-22 10:59:49.023642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:04:52.133 [2024-07-22 10:59:49.026535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:04:52.133 [2024-07-22 10:59:49.026582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f8240 (9): Bad file descriptor
01:04:52.133 [2024-07-22 10:59:49.062119] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
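The sequence above is the expected pattern for this failover case: I/O still queued on qpair 1 is completed with ABORTED - SQ DELETION (00/08) once the submission queue is deleted, bdev_nvme frees the disconnected qpair, fails the controller over from 10.0.0.2:4421 to 10.0.0.2:4422, and the subsequent reset completes successfully. When triaging a run like this by hand, a minimal shell sketch along the lines below can tally the aborted commands per opcode and confirm that the failover and reset notices are present. It is an illustrative helper, not part of the autotest suite, and build.log is a placeholder for wherever the console output was saved.

LOG=build.log  # placeholder: path to the captured console log
# Count aborted READ/WRITE submissions printed by nvme_io_qpair_print_command
grep -Eo 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]+ sqid:1' "$LOG" | awk '{print $3}' | sort | uniq -c
# Count completions aborted because the submission queue was deleted
grep -o 'ABORTED - SQ DELETION (00/08)' "$LOG" | wc -l
# Verify the failover and the successful controller reset were logged
grep -Eo 'Start failover from [0-9.:]+ to [0-9.:]+|Resetting controller successful' "$LOG"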
01:04:52.133 [2024-07-22 10:59:53.450990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451380] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.451442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.451987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.451999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.452025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.452051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.452076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.452102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.452128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.452154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86208 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.452185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:52.133 [2024-07-22 10:59:53.452215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 
[2024-07-22 10:59:53.452459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.133 [2024-07-22 10:59:53.452875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.133 [2024-07-22 10:59:53.452894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.452906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.452919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.452931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.452945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.452957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.452971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.452983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.452996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 01:04:52.134 [2024-07-22 10:59:53.453261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453528] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453799] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:52.134 [2024-07-22 10:59:53.453946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.453976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.453986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86960 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.453998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86968 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86976 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86984 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86992 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87000 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86224 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86232 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86240 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86248 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86256 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86264 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86272 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86280 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86288 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:04:52.134 [2024-07-22 10:59:53.454627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.454645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86296 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.454656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.454669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.454677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.472102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86304 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.472146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.472173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.472185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.472199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86312 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.472216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.472233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.472247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.472260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86320 len:8 PRP1 0x0 PRP2 0x0 01:04:52.134 [2024-07-22 10:59:53.472292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.134 [2024-07-22 10:59:53.472309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.134 [2024-07-22 10:59:53.472321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.134 [2024-07-22 10:59:53.472333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86328 len:8 PRP1 0x0 PRP2 0x0 01:04:52.135 [2024-07-22 10:59:53.472349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.135 [2024-07-22 10:59:53.472366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:52.135 [2024-07-22 10:59:53.472379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:52.135 [2024-07-22 10:59:53.472391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86336 len:8 PRP1 0x0 PRP2 0x0 01:04:52.135 [2024-07-22 10:59:53.472407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.135 [2024-07-22 10:59:53.472478] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa1f1c0 was disconnected and freed. reset controller. 01:04:52.135 [2024-07-22 10:59:53.472497] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 01:04:52.135 [2024-07-22 10:59:53.472573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:04:52.135 [2024-07-22 10:59:53.472593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.135 [2024-07-22 10:59:53.472629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:04:52.135 [2024-07-22 10:59:53.472645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.135 [2024-07-22 10:59:53.472662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:04:52.135 [2024-07-22 10:59:53.472679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.135 [2024-07-22 10:59:53.472696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:04:52.135 [2024-07-22 10:59:53.472712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:52.135 [2024-07-22 10:59:53.472728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:04:52.135 [2024-07-22 10:59:53.472789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f8240 (9): Bad file descriptor 01:04:52.135 [2024-07-22 10:59:53.477088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:04:52.135 [2024-07-22 10:59:53.508255] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
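The failover sequence above (bdev_nvme_failover_trid moving from 10.0.0.2:4422 to 10.0.0.2:4420, then a successful controller reset) is driven by the path setup that host/failover.sh performs over JSON-RPC. A minimal sketch of that setup, reconstructed from the rpc.py calls traced further down in this same log; the subsystem NQN, bdev name and ports 4420-4422 are taken from the log, not invented:

  # Expose the same subsystem on two extra TCP ports next to the original 4420 listener.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # Attach the same controller name once per port on the bdevperf RPC socket; bdev_nvme then
  # holds 4421/4422 as alternate trids it can fail over to when the active path drops.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1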
01:04:52.135 
01:04:52.135 Latency(us)
01:04:52.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:04:52.135 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:04:52.135 Verification LBA range: start 0x0 length 0x4000
01:04:52.135 NVMe0n1 : 15.01 11637.89 45.46 261.00 0.00 10734.63 444.14 28214.70
01:04:52.135 ===================================================================================================================
01:04:52.135 Total : 11637.89 45.46 261.00 0.00 10734.63 444.14 28214.70
01:04:52.135 Received shutdown signal, test time was about 15.000000 seconds
01:04:52.135 
01:04:52.135 Latency(us)
01:04:52.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:04:52.135 ===================================================================================================================
01:04:52.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=107302
01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 107302 /var/tmp/bdevperf.sock
01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 107302 ']'
01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
01:04:52.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
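The trace just above starts a fresh bdevperf instance with -z, which keeps it idle until it is told to run over RPC, while waitforlisten simply polls for the UNIX socket. A rough sketch of that launch-and-drive pattern, assuming the same socket path and the bdevperf.py helper that appears later in this trace; the backgrounding with & and the pid capture are an illustration of the flow, not the literal script text:

  # Start bdevperf idle: -z defers I/O until a perform_tests request arrives, -r sets the RPC socket
  # (queue depth, I/O size, workload and runtime mirror the command traced above).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # ...attach the NVMe0 paths over the same socket, then kick off the actual run:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests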
01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 01:04:52.135 10:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:04:52.701 11:00:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:04:52.701 11:00:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 01:04:52.701 11:00:00 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:04:52.960 [2024-07-22 11:00:00.722642] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:04:52.960 11:00:00 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 01:04:53.219 [2024-07-22 11:00:00.914516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 01:04:53.219 11:00:00 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:04:53.477 NVMe0n1 01:04:53.477 11:00:01 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:04:53.734 01:04:53.734 11:00:01 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:04:53.992 01:04:53.992 11:00:01 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 01:04:53.992 11:00:01 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:04:54.249 11:00:01 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:04:54.249 11:00:02 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 01:04:57.538 11:00:05 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:04:57.538 11:00:05 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 01:04:57.538 11:00:05 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=107433 01:04:57.538 11:00:05 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:04:57.538 11:00:05 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 107433 01:04:58.912 0 01:04:58.912 11:00:06 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:04:58.912 [2024-07-22 10:59:59.690554] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
01:04:58.912 [2024-07-22 10:59:59.690636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107302 ] 01:04:58.912 [2024-07-22 10:59:59.810889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:04:58.912 [2024-07-22 10:59:59.819960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:58.912 [2024-07-22 10:59:59.867087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:58.912 [2024-07-22 11:00:02.141399] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 01:04:58.912 [2024-07-22 11:00:02.141495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:04:58.912 [2024-07-22 11:00:02.141514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:58.912 [2024-07-22 11:00:02.141530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:04:58.912 [2024-07-22 11:00:02.141542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:58.912 [2024-07-22 11:00:02.141555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:04:58.912 [2024-07-22 11:00:02.141567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:58.912 [2024-07-22 11:00:02.141579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:04:58.912 [2024-07-22 11:00:02.141591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:58.912 [2024-07-22 11:00:02.141603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:04:58.912 [2024-07-22 11:00:02.141636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:04:58.912 [2024-07-22 11:00:02.141657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc21240 (9): Bad file descriptor 01:04:58.912 [2024-07-22 11:00:02.151319] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 01:04:58.912 Running I/O for 1 seconds... 
01:04:58.912 
01:04:58.912 Latency(us)
01:04:58.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:04:58.912 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:04:58.912 Verification LBA range: start 0x0 length 0x4000
01:04:58.912 NVMe0n1 : 1.01 11775.77 46.00 0.00 0.00 10825.49 1579.18 13686.23
01:04:58.912 ===================================================================================================================
01:04:58.912 Total : 11775.77 46.00 0.00 0.00 10825.49 1579.18 13686.23
01:04:58.912 11:00:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:04:58.912 11:00:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
01:04:58.912 11:00:06 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:04:59.187 11:00:06 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
01:04:59.187 11:00:06 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:04:59.187 11:00:07 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:04:59.463 11:00:07 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 107302
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 107302 ']'
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 107302
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107302
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
01:05:02.754 killing process with pid 107302
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107302'
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 107302
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 107302
01:05:02.754 11:00:10 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
01:05:03.013 11:00:10 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:05:03.013 11:00:10 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
01:05:03.013 11:00:10 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:05:03.013
11:00:10 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 01:05:03.013 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 01:05:03.013 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 01:05:03.013 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:05:03.013 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 01:05:03.013 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 01:05:03.013 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:05:03.013 rmmod nvme_tcp 01:05:03.013 rmmod nvme_fabrics 01:05:03.013 rmmod nvme_keyring 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 106946 ']' 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 106946 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 106946 ']' 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 106946 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106946 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:05:03.271 killing process with pid 106946 01:05:03.271 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106946' 01:05:03.272 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 106946 01:05:03.272 11:00:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 106946 01:05:03.272 11:00:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:05:03.272 11:00:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:05:03.272 11:00:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:05:03.272 11:00:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:05:03.272 11:00:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 01:05:03.272 11:00:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:03.272 11:00:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:05:03.272 11:00:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:03.531 11:00:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:05:03.531 01:05:03.531 real 0m31.337s 01:05:03.531 user 1m59.837s 01:05:03.531 sys 0m5.370s 01:05:03.531 11:00:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:03.531 11:00:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:05:03.531 ************************************ 01:05:03.531 END TEST nvmf_failover 01:05:03.531 ************************************ 01:05:03.531 11:00:11 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:05:03.531 11:00:11 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:05:03.531 11:00:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:05:03.531 11:00:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:03.531 11:00:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:05:03.531 ************************************ 01:05:03.531 START TEST nvmf_host_discovery 01:05:03.531 ************************************ 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:05:03.531 * Looking for test storage... 01:05:03.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:05:03.531 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:05:03.789 Cannot find device "nvmf_tgt_br" 01:05:03.789 
11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:05:03.789 Cannot find device "nvmf_tgt_br2" 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:05:03.789 Cannot find device "nvmf_tgt_br" 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:05:03.789 Cannot find device "nvmf_tgt_br2" 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:03.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:03.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:05:03.789 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:05:04.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:05:04.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 01:05:04.046 01:05:04.046 --- 10.0.0.2 ping statistics --- 01:05:04.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:04.046 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:05:04.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:05:04.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 01:05:04.046 01:05:04.046 --- 10.0.0.3 ping statistics --- 01:05:04.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:04.046 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:05:04.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:05:04.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 01:05:04.046 01:05:04.046 --- 10.0.0.1 ping statistics --- 01:05:04.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:04.046 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=107737 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 107737 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 107737 ']' 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 01:05:04.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:05:04.046 11:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:04.046 [2024-07-22 11:00:11.952709] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:05:04.046 [2024-07-22 11:00:11.952782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:05:04.304 [2024-07-22 11:00:12.071035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
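[editor's sketch] The nvmf_veth_init trace above builds a small veth/bridge topology and then starts nvmf_tgt inside the target namespace. For reference, the same topology can be reproduced by hand with roughly the commands below; names and addresses are taken directly from the trace, while the real helper in nvmf/common.sh adds the error handling and teardown paths omitted here.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # NVMF_SECOND_TARGET_IP
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> target ports
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator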
01:05:04.304 [2024-07-22 11:00:12.094925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:04.304 [2024-07-22 11:00:12.140950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:05:04.304 [2024-07-22 11:00:12.141208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:05:04.304 [2024-07-22 11:00:12.141329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:05:04.304 [2024-07-22 11:00:12.141374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:05:04.304 [2024-07-22 11:00:12.141401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:05:04.304 [2024-07-22 11:00:12.141448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:05:04.871 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:05:04.871 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 01:05:04.871 11:00:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:05:04.871 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 01:05:04.871 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:05.129 [2024-07-22 11:00:12.862983] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:05.129 [2024-07-22 11:00:12.875082] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:05.129 null0 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:05.129 null1 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=107787 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 107787 /tmp/host.sock 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 107787 ']' 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:05:05.129 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:05:05.129 11:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:05.129 [2024-07-22 11:00:12.962036] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:05:05.129 [2024-07-22 11:00:12.962229] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107787 ] 01:05:05.456 [2024-07-22 11:00:13.081818] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
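[editor's sketch] Stripped of the rpc_cmd wrapper (which in SPDK's test helpers forwards its arguments to scripts/rpc.py on the chosen socket), the target/host bring-up traced above and immediately below corresponds roughly to the sequence sketched here. The relative paths and the & backgrounding are illustrative only, and waitforlisten's polling of the RPC socket is omitted.

  # Target side: nvmf_tgt inside the namespace, driven over the default
  # RPC socket /var/tmp/spdk.sock.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                  # discovery service on port 8009
  ./scripts/rpc.py bdev_null_create null0 1000 512
  ./scripts/rpc.py bdev_null_create null1 1000 512

  # Host side: a second nvmf_tgt instance acting as the NVMe-oF host,
  # driven over its own socket, which then starts the discovery service
  # client (as traced just below).
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  ./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test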
01:05:05.456 [2024-07-22 11:00:13.090724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:05.456 [2024-07-22 11:00:13.135146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:05:06.048 11:00:13 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:05:06.048 11:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:05:06.304 11:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.304 [2024-07-22 11:00:14.173342] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:06.304 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 
'' == \n\v\m\e\0 ]] 01:05:06.562 11:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 01:05:07.129 [2024-07-22 11:00:14.862428] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:05:07.129 [2024-07-22 11:00:14.862452] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:05:07.129 [2024-07-22 11:00:14.862465] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:05:07.129 [2024-07-22 11:00:14.949391] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 01:05:07.129 [2024-07-22 11:00:15.006122] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:05:07.129 [2024-07-22 11:00:15.006161] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:07.695 
11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:07.695 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:07.952 [2024-07-22 11:00:15.715666] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:05:07.952 [2024-07-22 11:00:15.716310] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:05:07.952 [2024-07-22 11:00:15.716333] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:05:07.952 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:07.953 [2024-07-22 11:00:15.804298] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:07.953 [2024-07-22 11:00:15.864554] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:05:07.953 [2024-07-22 11:00:15.864584] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:05:07.953 [2024-07-22 11:00:15.864590] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 01:05:07.953 11:00:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 01:05:09.328 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:09.328 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:05:09.328 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:05:09.328 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:05:09.328 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:05:09.328 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:05:09.328 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:05:09.328 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.329 [2024-07-22 11:00:16.983135] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:05:09.329 [2024-07-22 11:00:16.983163] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:05:09.329 [2024-07-22 11:00:16.990517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:05:09.329 [2024-07-22 11:00:16.990546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:09.329 [2024-07-22 11:00:16.990557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:05:09.329 [2024-07-22 11:00:16.990566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:09.329 [2024-07-22 11:00:16.990576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:05:09.329 [2024-07-22 11:00:16.990584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:09.329 [2024-07-22 11:00:16.990593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:05:09.329 [2024-07-22 11:00:16.990601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:09.329 [2024-07-22 11:00:16.990610] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230cc40 is same with the state(5) to be set 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:05:09.329 11:00:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:05:09.329 [2024-07-22 11:00:17.000467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230cc40 (9): Bad file descriptor 01:05:09.329 [2024-07-22 11:00:17.010490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:05:09.329 [2024-07-22 11:00:17.010593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:05:09.329 [2024-07-22 11:00:17.010609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230cc40 with addr=10.0.0.2, port=4420 01:05:09.329 [2024-07-22 11:00:17.010619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230cc40 is same with the state(5) to be set 01:05:09.329 [2024-07-22 11:00:17.010632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230cc40 (9): Bad file descriptor 01:05:09.329 [2024-07-22 11:00:17.010645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:05:09.329 [2024-07-22 11:00:17.010653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:05:09.329 [2024-07-22 11:00:17.010664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:05:09.329 [2024-07-22 11:00:17.010676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
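[editor's sketch] The waitforcondition pattern traced repeatedly in this test (common/autotest_common.sh@912 through @918) is a bounded poll of an arbitrary shell condition. Reconstructed from the trace lines alone, it behaves roughly as follows; the failure path after the loop is an assumption, since the trace never reaches it here.

  waitforcondition() {
      local cond=$1          # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
      local max=10           # at most ten attempts, one second apart
      while (( max-- )); do
          if eval "$cond"; then
              return 0       # condition met
          fi
          sleep 1
      done
      return 1               # assumption: timeout path, not exercised in this trace
  }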
01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.329 [2024-07-22 11:00:17.020526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:05:09.329 [2024-07-22 11:00:17.020598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:05:09.329 [2024-07-22 11:00:17.020612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230cc40 with addr=10.0.0.2, port=4420 01:05:09.329 [2024-07-22 11:00:17.020621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230cc40 is same with the state(5) to be set 01:05:09.329 [2024-07-22 11:00:17.020634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230cc40 (9): Bad file descriptor 01:05:09.329 [2024-07-22 11:00:17.020646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:05:09.329 [2024-07-22 11:00:17.020654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:05:09.329 [2024-07-22 11:00:17.020663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:05:09.329 [2024-07-22 11:00:17.020675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:05:09.329 [2024-07-22 11:00:17.030553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:05:09.329 [2024-07-22 11:00:17.030621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:05:09.329 [2024-07-22 11:00:17.030636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230cc40 with addr=10.0.0.2, port=4420 01:05:09.329 [2024-07-22 11:00:17.030646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230cc40 is same with the state(5) to be set 01:05:09.329 [2024-07-22 11:00:17.030660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230cc40 (9): Bad file descriptor 01:05:09.329 [2024-07-22 11:00:17.030671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:05:09.329 [2024-07-22 11:00:17.030679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:05:09.329 [2024-07-22 11:00:17.030687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:05:09.329 [2024-07-22 11:00:17.030699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:05:09.329 [2024-07-22 11:00:17.040581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:05:09.329 [2024-07-22 11:00:17.040639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:05:09.329 [2024-07-22 11:00:17.040652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230cc40 with addr=10.0.0.2, port=4420 01:05:09.329 [2024-07-22 11:00:17.040661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230cc40 is same with the state(5) to be set 01:05:09.329 [2024-07-22 11:00:17.040672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230cc40 (9): Bad file descriptor 01:05:09.329 [2024-07-22 11:00:17.040684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:05:09.329 [2024-07-22 11:00:17.040692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:05:09.329 [2024-07-22 11:00:17.040700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:05:09.329 [2024-07-22 11:00:17.040711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.329 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:09.329 [2024-07-22 11:00:17.050603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:05:09.329 [2024-07-22 11:00:17.050654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:05:09.330 [2024-07-22 11:00:17.050667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230cc40 with addr=10.0.0.2, port=4420 01:05:09.330 [2024-07-22 11:00:17.050676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230cc40 is same with the state(5) to be set 01:05:09.330 [2024-07-22 11:00:17.050688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230cc40 (9): Bad file descriptor 
01:05:09.330 [2024-07-22 11:00:17.050699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:05:09.330 [2024-07-22 11:00:17.050707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:05:09.330 [2024-07-22 11:00:17.050715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:05:09.330 [2024-07-22 11:00:17.050726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:09.330 [2024-07-22 11:00:17.060621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:05:09.330 [2024-07-22 11:00:17.060682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:05:09.330 [2024-07-22 11:00:17.060696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230cc40 with addr=10.0.0.2, port=4420 01:05:09.330 [2024-07-22 11:00:17.060705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230cc40 is same with the state(5) to be set 01:05:09.330 [2024-07-22 11:00:17.060718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230cc40 (9): Bad file descriptor 01:05:09.330 [2024-07-22 11:00:17.060729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:05:09.330 [2024-07-22 11:00:17.060737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:05:09.330 [2024-07-22 11:00:17.060745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:05:09.330 [2024-07-22 11:00:17.060757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
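Editor's note: the host/discovery.sh@55 xtrace above pipes the host-side RPC output through jq, sort and xargs to build a comparable bdev list. A hedged reconstruction of that helper (rpc_cmd is the harness wrapper around scripts/rpc.py; the pipeline order is inferred from the trace):

# get_bdev_list as implied by the xtrace: flatten the bdev names into one
# sorted, space-separated line for string comparison by waitforcondition.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}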
01:05:09.330 [2024-07-22 11:00:17.069036] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 01:05:09.330 [2024-07-22 11:00:17.069058] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:05:09.330 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 01:05:09.589 
11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:09.589 11:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:10.524 [2024-07-22 11:00:18.357130] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:05:10.524 [2024-07-22 11:00:18.357165] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:05:10.524 [2024-07-22 11:00:18.357179] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:05:10.524 [2024-07-22 11:00:18.443074] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 01:05:10.783 [2024-07-22 11:00:18.502958] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:05:10.783 [2024-07-22 11:00:18.503004] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:05:10.783 11:00:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:10.783 2024/07/22 11:00:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 01:05:10.783 request: 01:05:10.783 { 01:05:10.783 "method": "bdev_nvme_start_discovery", 01:05:10.783 "params": { 01:05:10.783 "name": "nvme", 01:05:10.783 "trtype": "tcp", 01:05:10.783 "traddr": "10.0.0.2", 01:05:10.783 "adrfam": "ipv4", 01:05:10.783 "trsvcid": "8009", 01:05:10.783 "hostnqn": "nqn.2021-12.io.spdk:test", 01:05:10.783 "wait_for_attach": true 01:05:10.783 } 01:05:10.783 } 01:05:10.783 Got JSON-RPC error response 01:05:10.783 GoRPCClient: error on JSON-RPC call 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:10.783 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:05:10.784 11:00:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:10.784 2024/07/22 11:00:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 01:05:10.784 request: 01:05:10.784 { 01:05:10.784 "method": "bdev_nvme_start_discovery", 01:05:10.784 "params": { 01:05:10.784 "name": "nvme_second", 01:05:10.784 "trtype": "tcp", 01:05:10.784 "traddr": "10.0.0.2", 01:05:10.784 "adrfam": "ipv4", 01:05:10.784 "trsvcid": "8009", 01:05:10.784 "hostnqn": "nqn.2021-12.io.spdk:test", 01:05:10.784 "wait_for_attach": true 01:05:10.784 } 01:05:10.784 } 01:05:10.784 Got JSON-RPC error response 01:05:10.784 GoRPCClient: error on JSON-RPC call 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 01:05:10.784 11:00:18 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:05:10.784 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:11.042 11:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:11.978 [2024-07-22 11:00:19.738242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:05:11.978 [2024-07-22 11:00:19.738313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234abc0 with addr=10.0.0.2, port=8010 01:05:11.978 [2024-07-22 11:00:19.738334] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:05:11.978 [2024-07-22 11:00:19.738344] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:05:11.978 [2024-07-22 11:00:19.738352] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 01:05:12.911 [2024-07-22 11:00:20.736631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:05:12.912 [2024-07-22 11:00:20.736691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234abc0 with addr=10.0.0.2, port=8010 01:05:12.912 [2024-07-22 11:00:20.736712] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:05:12.912 [2024-07-22 11:00:20.736721] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:05:12.912 [2024-07-22 11:00:20.736730] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 
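Editor's note: the negative tests above (host/discovery.sh@143/@149/@155) wrap rpc_cmd in NOT so that the expected "File exists" and timeout errors count as passes. A simplified sketch of that wrapper, based on the autotest_common.sh@648-675 xtrace (the real helper also validates the argument via valid_exec_arg and special-cases exit codes above 128; both are omitted here):

# NOT: succeed only if the wrapped command fails (simplified sketch).
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # invert: a non-zero exit status is the expected outcome
}
# Usage, as in host/discovery.sh@155:
# NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000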
01:05:13.848 [2024-07-22 11:00:21.734886] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 01:05:13.848 2024/07/22 11:00:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 01:05:13.848 request: 01:05:13.848 { 01:05:13.848 "method": "bdev_nvme_start_discovery", 01:05:13.848 "params": { 01:05:13.848 "name": "nvme_second", 01:05:13.848 "trtype": "tcp", 01:05:13.848 "traddr": "10.0.0.2", 01:05:13.848 "adrfam": "ipv4", 01:05:13.848 "trsvcid": "8010", 01:05:13.848 "hostnqn": "nqn.2021-12.io.spdk:test", 01:05:13.848 "wait_for_attach": false, 01:05:13.848 "attach_timeout_ms": 3000 01:05:13.848 } 01:05:13.848 } 01:05:13.848 Got JSON-RPC error response 01:05:13.848 GoRPCClient: error on JSON-RPC call 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:05:13.848 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 107787 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:05:14.106 rmmod nvme_tcp 01:05:14.106 rmmod nvme_fabrics 01:05:14.106 rmmod nvme_keyring 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery 
-- nvmf/common.sh@124 -- # set -e 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 107737 ']' 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 107737 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 107737 ']' 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 107737 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107737 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:05:14.106 killing process with pid 107737 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107737' 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 107737 01:05:14.106 11:00:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 107737 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:05:14.364 01:05:14.364 real 0m10.893s 01:05:14.364 user 0m20.557s 01:05:14.364 sys 0m2.203s 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:14.364 11:00:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:05:14.364 ************************************ 01:05:14.364 END TEST nvmf_host_discovery 01:05:14.364 ************************************ 01:05:14.364 11:00:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:05:14.364 11:00:22 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:05:14.364 11:00:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:05:14.364 11:00:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:14.364 11:00:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:05:14.364 ************************************ 01:05:14.364 START TEST nvmf_host_multipath_status 01:05:14.364 ************************************ 01:05:14.364 11:00:22 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:05:14.622 * Looking for test storage... 01:05:14.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:05:14.622 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:05:14.623 Cannot find device "nvmf_tgt_br" 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status 
-- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:05:14.623 Cannot find device "nvmf_tgt_br2" 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:05:14.623 Cannot find device "nvmf_tgt_br" 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:05:14.623 Cannot find device "nvmf_tgt_br2" 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 01:05:14.623 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:14.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:14.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:05:14.881 11:00:22 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:05:14.881 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:05:15.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:05:15.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 01:05:15.139 01:05:15.139 --- 10.0.0.2 ping statistics --- 01:05:15.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:15.139 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:05:15.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:05:15.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 01:05:15.139 01:05:15.139 --- 10.0.0.3 ping statistics --- 01:05:15.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:15.139 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:05:15.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:05:15.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 01:05:15.139 01:05:15.139 --- 10.0.0.1 ping statistics --- 01:05:15.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:15.139 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=108273 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 108273 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 108273 ']' 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:05:15.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 01:05:15.139 11:00:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:05:15.139 [2024-07-22 11:00:22.954108] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:05:15.139 [2024-07-22 11:00:22.954180] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:05:15.396 [2024-07-22 11:00:23.073870] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
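Editor's note: the nvmf_veth_init steps traced above boil down to the topology below: one initiator veth on the host (10.0.0.1) and two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), all joined by the nvmf_br bridge. This condensed sketch only repeats commands already present in the log; the link-up and iptables steps are omitted for brevity.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br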
01:05:15.396 [2024-07-22 11:00:23.097431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:05:15.396 [2024-07-22 11:00:23.137897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:05:15.396 [2024-07-22 11:00:23.137950] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:05:15.396 [2024-07-22 11:00:23.137959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:05:15.396 [2024-07-22 11:00:23.137967] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:05:15.396 [2024-07-22 11:00:23.137974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:05:15.396 [2024-07-22 11:00:23.138611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:05:15.396 [2024-07-22 11:00:23.138614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:15.961 11:00:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:05:15.961 11:00:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 01:05:15.961 11:00:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:05:15.961 11:00:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 01:05:15.961 11:00:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:05:15.961 11:00:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:05:15.961 11:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=108273 01:05:15.961 11:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:05:16.219 [2024-07-22 11:00:24.030325] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:05:16.219 11:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:05:16.477 Malloc0 01:05:16.477 11:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:05:16.744 11:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:05:17.003 11:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:05:17.003 [2024-07-22 11:00:24.850788] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:05:17.003 11:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:05:17.261 [2024-07-22 11:00:25.030583] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:05:17.261 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:05:17.261 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=108372 01:05:17.261 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:05:17.261 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 108372 /var/tmp/bdevperf.sock 01:05:17.261 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 108372 ']' 01:05:17.261 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:05:17.261 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 01:05:17.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:05:17.261 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:05:17.261 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 01:05:17.261 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:05:18.192 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:05:18.192 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 01:05:18.192 11:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:05:18.450 11:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 01:05:18.709 Nvme0n1 01:05:18.709 11:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:05:18.969 Nvme0n1 01:05:18.969 11:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 01:05:18.969 11:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:05:20.869 11:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 01:05:20.869 11:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 01:05:21.126 11:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:05:21.383 11:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 01:05:22.319 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true 
true true 01:05:22.319 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:22.319 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:22.319 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:22.589 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:22.589 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:05:22.589 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:22.589 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:22.850 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:22.850 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:22.850 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:22.850 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:23.113 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:23.113 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:23.113 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:23.113 11:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:23.113 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:23.113 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:23.113 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:23.113 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:23.376 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:23.376 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:23.376 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:23.376 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:23.634 11:00:31 
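For reference, each port_status check traced here is a single bdev_nvme_get_io_paths RPC against the bdevperf socket, piped through a jq selector on the listener's trsvcid. A minimal re-implementation of that helper, reconstructed from the commands visible in the trace (RPC_PY and SOCK simply name the paths used in this run):

# Sketch of a port_status-style check; the RPC method, socket and JSON field
# names are the ones shown in the trace above, the shell wrapper is mine.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

port_status() {
    local port=$1 attr=$2 expected=$3 actual
    # bdev_nvme_get_io_paths reports current/connected/accessible per io_path;
    # pick the path whose listener trsvcid matches the requested port.
    actual=$("$RPC_PY" -s "$SOCK" bdev_nvme_get_io_paths | \
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

# Example: assert that the 4420 path is the one currently used for I/O.
port_status 4420 current true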
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:23.634 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 01:05:23.634 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:05:23.892 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:05:23.892 11:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 01:05:25.266 11:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 01:05:25.266 11:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:05:25.266 11:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:25.266 11:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:25.266 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:25.266 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:05:25.266 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:25.266 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:25.523 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:25.523 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:25.523 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:25.523 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:25.523 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:25.523 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:25.523 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:25.523 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:25.779 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:25.779 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:25.779 11:00:33 
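The ANA flips driven between the check rounds are two nvmf_subsystem_listener_set_ana_state calls against the target, one per listener port. A hedged sketch of that step, reusing the subsystem NQN, address and ports from this run:

# Sketch of the set_ANA_state step seen in the trace: set the ANA state of the
# 4420 and 4421 listeners (optimized | non_optimized | inaccessible).
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {
    # $1 = state for the 4420 listener, $2 = state for the 4421 listener
    "$RPC_PY" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$RPC_PY" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized optimized   # the transition exercised just above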
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:25.779 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:26.036 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:26.036 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:26.036 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:26.036 11:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:26.293 11:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:26.293 11:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 01:05:26.293 11:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:05:26.551 11:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 01:05:26.809 11:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 01:05:27.742 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 01:05:27.742 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:27.742 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:27.742 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:27.999 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:27.999 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:05:27.999 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:27.999 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:27.999 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:27.999 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:27.999 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:27.999 11:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:28.257 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:28.257 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:28.257 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:28.257 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:28.514 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:28.514 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:28.514 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:28.514 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:28.770 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:28.770 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:28.770 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:28.770 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:29.027 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:29.027 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 01:05:29.027 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:05:29.027 11:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:05:29.285 11:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 01:05:30.215 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 01:05:30.215 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:30.215 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:30.215 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:30.471 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:30.471 11:00:38 nvmf_tcp.nvmf_host_multipath_status 
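Each check_status round above is simply six of those port_status assertions in a fixed order: current, connected and accessible for the 4420 path and then the 4421 path. A compact sketch of that wrapper, built on the port_status helper sketched earlier (argument order inferred from the trace):

# Sketch of a check_status-style round: six assertions per ANA transition.
check_status() {
    # args: cur_4420 cur_4421 con_4420 con_4421 acc_4420 acc_4421
    port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
    port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

# e.g. the non_optimized/optimized round traced above:
check_status false true true true true true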
-- host/multipath_status.sh@69 -- # port_status 4421 current false 01:05:30.471 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:30.471 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:30.727 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:30.727 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:30.727 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:30.727 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:30.984 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:30.984 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:30.984 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:30.984 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:31.322 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:31.322 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:31.322 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:31.322 11:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:31.322 11:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:31.322 11:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:05:31.322 11:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:31.322 11:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:31.579 11:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:31.579 11:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 01:05:31.579 11:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:05:31.836 11:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:05:31.836 11:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 01:05:33.206 11:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 01:05:33.206 11:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:05:33.206 11:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:33.206 11:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:33.206 11:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:33.206 11:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:05:33.206 11:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:33.206 11:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:33.466 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:33.466 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:33.466 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:33.466 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:33.466 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:33.466 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:33.466 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:33.466 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:33.744 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:33.744 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:05:33.744 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:33.744 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:34.002 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:34.002 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:05:34.002 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:34.002 11:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:34.260 11:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:34.260 11:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 01:05:34.260 11:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:05:34.518 11:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:05:34.518 11:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:35.890 11:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:36.148 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:36.148 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:36.148 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:36.148 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 01:05:36.405 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:36.405 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:05:36.405 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:36.405 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:36.663 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:36.663 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:36.663 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:36.663 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:36.919 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:36.919 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 01:05:37.175 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 01:05:37.175 11:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 01:05:37.430 11:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:05:37.430 11:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 01:05:38.828 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 01:05:38.828 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:38.828 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:38.828 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:38.828 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:38.828 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:05:38.828 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:38.828 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
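Until the bdev_nvme_set_multipath_policy call just above, only one of the two paths reports current=true at a time; switching Nvme0n1 to active_active lets all paths in the preferred ANA state be current simultaneously, which is what the following true true ... round verifies. A sketch of that single step, using the same bdevperf RPC socket:

# Sketch: switch the assembled multipath bdev to the active_active policy so
# I/O is spread across all usable preferred paths (names taken from the trace).
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

"$RPC_PY" -s "$SOCK" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active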
01:05:39.085 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:39.085 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:39.085 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:39.085 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:39.085 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:39.085 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:39.085 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:39.085 11:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:39.342 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:39.342 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:39.342 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:39.342 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:39.600 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:39.600 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:39.600 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:39.600 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:39.856 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:39.856 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 01:05:39.856 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:05:39.856 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:05:40.114 11:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 01:05:41.487 11:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 01:05:41.488 11:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:05:41.488 
11:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:41.488 11:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:41.488 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:41.488 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:05:41.488 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:41.488 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:41.488 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:41.488 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:41.488 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:41.488 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:41.745 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:41.745 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:41.745 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:41.745 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:42.001 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:42.001 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:42.001 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:42.001 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:42.259 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:42.259 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:42.259 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:42.259 11:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:42.259 11:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:42.259 11:00:50 
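When a round like this fails, it can be handy to dump all three attributes for both paths with a single RPC call instead of six separate rpc.py/jq invocations. A small sketch (the one-line-per-path output format is mine, not part of the test):

# Sketch: print trsvcid, current, connected and accessible for every io_path
# reported by one bdev_nvme_get_io_paths call (same socket as above).
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

"$RPC_PY" -s "$SOCK" bdev_nvme_get_io_paths | \
    jq -r '.poll_groups[].io_paths[] |
           "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'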
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 01:05:42.259 11:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:05:42.516 11:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 01:05:42.774 11:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 01:05:43.704 11:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 01:05:43.704 11:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:43.704 11:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:43.704 11:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:43.962 11:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:43.962 11:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:05:43.962 11:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:43.962 11:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:44.219 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:44.219 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:44.219 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:44.219 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:44.476 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:44.476 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:44.476 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:44.476 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:44.733 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:44.733 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:44.733 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:44.733 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:44.733 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:44.733 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:44.733 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:44.733 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:44.989 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:44.989 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 01:05:44.990 11:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:05:45.294 11:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:05:45.576 11:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 01:05:46.506 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 01:05:46.506 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:46.506 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:46.506 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:46.763 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:46.763 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:05:46.763 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:46.763 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:47.019 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:47.019 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:47.020 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:47.020 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:47.020 11:00:54 
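Every transition in this run relies on a fixed sleep 1 before the next check_status; on a slower host a short retry loop around the same query is a more forgiving way to wait for the new ANA state to reach the initiator. A sketch of such a loop (my own variation, reusing the hypothetical port_status helper rather than anything in the script itself):

# Sketch: poll for an expected path attribute instead of sleeping a fixed
# second, retrying the get_io_paths/jq query a bounded number of times.
wait_for_port_status() {
    local port=$1 attr=$2 expected=$3 retries=${4:-10}
    for ((i = 0; i < retries; i++)); do
        port_status "$port" "$attr" "$expected" && return 0
        sleep 0.5
    done
    return 1
}

wait_for_port_status 4421 accessible false   # e.g. after non_optimized/inaccessible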
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:47.020 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:47.276 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:47.276 11:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:47.276 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:47.276 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:47.276 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:47.276 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:47.533 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:47.533 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:05:47.533 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:47.533 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 108372 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 108372 ']' 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 108372 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108372 01:05:47.791 killing process with pid 108372 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108372' 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 108372 01:05:47.791 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 108372 01:05:48.051 Connection closed with partial response: 01:05:48.051 01:05:48.051 01:05:48.051 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 108372 01:05:48.051 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:05:48.051 [2024-07-22 11:00:25.083922] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:05:48.051 [2024-07-22 11:00:25.084055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108372 ] 01:05:48.051 [2024-07-22 11:00:25.201799] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:05:48.051 [2024-07-22 11:00:25.227088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:48.051 [2024-07-22 11:00:25.267984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:05:48.051 Running I/O for 90 seconds... 01:05:48.051 [2024-07-22 11:00:39.543620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.543688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.543735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.543750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.543769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.543782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.543800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.543812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.543830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.543842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.543860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.543873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.543890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.543902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.543920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:92 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.543932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.544441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.544464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.544485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.544499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.544534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.544546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.544564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.544577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.544595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.051 [2024-07-22 11:00:39.544607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:05:48.051 [2024-07-22 11:00:39.544625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544776] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.544978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.544997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 
dnr:0 01:05:48.052 [2024-07-22 11:00:39.545088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.545980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.545998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:05:48.052 [2024-07-22 11:00:39.546042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:05:48.052 [2024-07-22 11:00:39.546747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.052 [2024-07-22 11:00:39.546759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.546781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.546793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.546814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.546826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.546848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:39.546861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.546882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:39.546895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.546916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:39.546929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.546950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:39.546964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.546985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:39.546998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:39.547032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:39.547065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 
dnr:0 01:05:48.053 [2024-07-22 11:00:39.547192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.547969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.547990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:05:48.053 [2024-07-22 11:00:39.548206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70248 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:39.548580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:39.548593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.257174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.257233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.257291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:53.257308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.257327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:53.257341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.257360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:53.257373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.258994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:53.259335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:53.259365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:53.259395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:53.259424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:53.259454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
01:05:48.053 [2024-07-22 11:00:53.259479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.053 [2024-07-22 11:00:53.259491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.259658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.259670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.261759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.261790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:05:48.053 [2024-07-22 11:00:53.261813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.053 [2024-07-22 11:00:53.261825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.261843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.054 [2024-07-22 11:00:53.261855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.261873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.054 [2024-07-22 11:00:53.261885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.261903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.054 [2024-07-22 11:00:53.261915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.261943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.054 [2024-07-22 11:00:53.261956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.261973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.054 [2024-07-22 11:00:53.261985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.262003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.054 [2024-07-22 11:00:53.262015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.262033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.054 [2024-07-22 11:00:53.262045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.262062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:48.054 [2024-07-22 11:00:53.262075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.262092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.054 [2024-07-22 11:00:53.262105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.262122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.054 [2024-07-22 11:00:53.262134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.262152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.054 [2024-07-22 11:00:53.262164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:05:48.054 [2024-07-22 11:00:53.262182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:48.054 [2024-07-22 11:00:53.262194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:05:48.054 Received shutdown signal, test time was about 28.845241 seconds 01:05:48.054 01:05:48.054 Latency(us) 01:05:48.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:05:48.054 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:05:48.054 Verification LBA range: start 0x0 length 0x4000 01:05:48.054 Nvme0n1 : 28.84 11544.14 45.09 0.00 0.00 11067.50 368.48 3018551.31 01:05:48.054 =================================================================================================================== 01:05:48.054 Total : 11544.14 45.09 0.00 0.00 11067.50 368.48 3018551.31 01:05:48.054 11:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:05:48.311 rmmod nvme_tcp 01:05:48.311 rmmod nvme_fabrics 01:05:48.311 rmmod nvme_keyring 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 108273 ']' 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 108273 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 108273 ']' 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 108273 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108273 01:05:48.311 killing process with pid 108273 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108273' 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 108273 01:05:48.311 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 108273 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:05:48.569 01:05:48.569 real 0m34.209s 01:05:48.569 user 1m47.062s 01:05:48.569 sys 0m10.794s 01:05:48.569 ************************************ 01:05:48.569 END TEST nvmf_host_multipath_status 01:05:48.569 ************************************ 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:48.569 11:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:05:48.828 11:00:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:05:48.828 11:00:56 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:05:48.828 11:00:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:05:48.828 11:00:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:48.828 11:00:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:05:48.828 ************************************ 01:05:48.828 START TEST nvmf_discovery_remove_ifc 01:05:48.828 ************************************ 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:05:48.828 * Looking for test storage... 
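At this point the trace shows how the harness chains suites: the previous stage prints its real/user/sys timing and an END TEST banner, returns 0, and run_test is invoked again to argument-check the call, disable xtrace, print a START TEST banner, and hand control to discovery_remove_ifc.sh. As a hedged, rough sketch only (the real helper lives in autotest_common.sh and its implementation is not shown in this log), a run_test-style wrapper inferred from the banners and timing lines in the trace could look like:

  # sketch of a run_test-style wrapper, inferred from the START/END banners and `time` output above
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # produces the real/user/sys summary seen in the trace
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }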
01:05:48.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:05:48.828 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:05:49.086 Cannot find device "nvmf_tgt_br" 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
01:05:49.086 Cannot find device "nvmf_tgt_br2" 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:05:49.086 Cannot find device "nvmf_tgt_br" 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:05:49.086 Cannot find device "nvmf_tgt_br2" 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:49.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:49.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:05:49.086 11:00:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:05:49.086 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:05:49.086 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:05:49.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:05:49.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 01:05:49.344 01:05:49.344 --- 10.0.0.2 ping statistics --- 01:05:49.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:49.344 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:05:49.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:05:49.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 01:05:49.344 01:05:49.344 --- 10.0.0.3 ping statistics --- 01:05:49.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:49.344 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:05:49.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:05:49.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 01:05:49.344 01:05:49.344 --- 10.0.0.1 ping statistics --- 01:05:49.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:49.344 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=109627 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 109627 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 109627 ']' 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 01:05:49.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 01:05:49.344 11:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:49.344 [2024-07-22 11:00:57.187670] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:05:49.344 [2024-07-22 11:00:57.187743] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:05:49.602 [2024-07-22 11:00:57.305877] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
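The trace above condenses two setup stages: nvmftestinit building the veth/bridge topology (nvmf_init_if on the host side, nvmf_tgt_if and nvmf_tgt_if2 moved into the nvmf_tgt_ns_spdk namespace, all bridged over nvmf_br and verified with the three pings), and nvmfappstart launching nvmf_tgt inside that namespace and waiting for its RPC socket. A minimal sketch of that launch-and-wait step, assuming the SPDK checkout path shown in the trace and the default /var/tmp/spdk.sock socket (waitforlisten's real implementation differs in detail):

# Hedged sketch, not the test's literal code: start the target inside the
# namespace, then poll its RPC socket until it answers before configuring it.
SPDK_DIR=/home/vagrant/spdk_repo/spdk                 # path as seen in the trace
ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# rpc_get_methods is a cheap RPC; it only succeeds once the app is listening.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done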
01:05:49.602 [2024-07-22 11:00:57.326669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:49.602 [2024-07-22 11:00:57.382002] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:05:49.602 [2024-07-22 11:00:57.382047] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:05:49.602 [2024-07-22 11:00:57.382059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:05:49.603 [2024-07-22 11:00:57.382069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:05:49.603 [2024-07-22 11:00:57.382078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:05:49.603 [2024-07-22 11:00:57.382105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:05:50.171 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:05:50.171 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 01:05:50.171 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:05:50.171 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 01:05:50.171 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:50.428 [2024-07-22 11:00:58.136553] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:05:50.428 [2024-07-22 11:00:58.144676] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:05:50.428 null0 01:05:50.428 [2024-07-22 11:00:58.176603] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=109676 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 109676 /tmp/host.sock 01:05:50.428 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 109676 ']' 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 01:05:50.428 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
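The rpc_cmd call at discovery_remove_ifc.sh@43 feeds its RPCs over stdin, so only their effects are visible above: the TCP transport comes up, the discovery listener opens on 10.0.0.2:8009, a null bdev named null0 is created, and the nqn.2016-06.io.spdk:cnode0 subsystem starts listening on 10.0.0.2:4420. A hedged equivalent using standalone rpc.py calls; the null bdev size/block size and the -a/-s subsystem flags are illustrative assumptions, not taken from the log:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o                  # '-t tcp -o' taken verbatim from NVMF_TRANSPORT_OPTS above
$RPC bdev_null_create null0 64 512                    # size/block size are illustrative; the log only shows the name
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009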
01:05:50.429 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 01:05:50.429 11:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:50.429 [2024-07-22 11:00:58.247875] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:05:50.429 [2024-07-22 11:00:58.248108] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109676 ] 01:05:50.685 [2024-07-22 11:00:58.365971] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:05:50.685 [2024-07-22 11:00:58.389904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:50.685 [2024-07-22 11:00:58.435699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:51.259 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:05:51.259 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 01:05:51.259 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:05:51.259 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 01:05:51.259 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:51.259 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:51.259 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:51.259 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 01:05:51.259 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:51.259 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:51.515 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:51.516 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 01:05:51.516 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:51.516 11:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:52.445 [2024-07-22 11:01:00.215990] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:05:52.445 [2024-07-22 11:01:00.216022] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:05:52.445 [2024-07-22 11:01:00.216035] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:05:52.445 [2024-07-22 11:01:00.301951] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 01:05:52.445 [2024-07-22 
11:01:00.358970] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:05:52.445 [2024-07-22 11:01:00.359031] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:05:52.445 [2024-07-22 11:01:00.359053] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:05:52.445 [2024-07-22 11:01:00.359068] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:05:52.445 [2024-07-22 11:01:00.359089] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:05:52.445 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:52.445 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 01:05:52.445 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:52.445 [2024-07-22 11:01:00.364401] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14173d0 was disconnected and freed. delete nvme_qpair. 01:05:52.445 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:52.445 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:52.445 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:52.445 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:52.445 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:52.445 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:52.701 11:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:53.630 11:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:53.630 11:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:53.630 11:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:53.630 11:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:53.630 11:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:53.630 11:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:53.630 11:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:53.630 11:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:53.630 11:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:53.630 11:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:55.001 11:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:55.001 11:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:55.001 11:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:55.001 11:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:55.001 11:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:55.001 11:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:55.001 11:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:55.001 11:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:55.001 11:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:55.001 11:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:55.968 11:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:55.968 11:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:55.968 11:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:55.968 11:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:55.968 11:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:55.968 11:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:55.968 11:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:55.968 11:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:55.968 11:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:55.968 11:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:56.899 11:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:56.899 11:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 01:05:56.899 11:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:56.899 11:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:56.899 11:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:56.899 11:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:56.899 11:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:56.899 11:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:56.899 11:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:56.899 11:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:57.833 11:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:57.833 11:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:57.833 11:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:57.833 11:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:57.833 11:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:57.833 11:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:57.833 11:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:57.833 11:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:57.833 11:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:57.833 11:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:58.091 [2024-07-22 11:01:05.788140] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 01:05:58.091 [2024-07-22 11:01:05.788202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:05:58.091 [2024-07-22 11:01:05.788231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:58.091 [2024-07-22 11:01:05.788243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:05:58.091 [2024-07-22 11:01:05.788251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:58.091 [2024-07-22 11:01:05.788260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:05:58.091 [2024-07-22 11:01:05.788269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:58.091 [2024-07-22 11:01:05.788278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:05:58.091 [2024-07-22 11:01:05.788294] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:58.091 [2024-07-22 11:01:05.788303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:05:58.091 [2024-07-22 11:01:05.788312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:58.091 [2024-07-22 11:01:05.788321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13de0a0 is same with the state(5) to be set 01:05:58.091 [2024-07-22 11:01:05.798117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13de0a0 (9): Bad file descriptor 01:05:58.091 [2024-07-22 11:01:05.808122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:05:59.023 11:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:59.023 11:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:59.023 11:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:59.023 11:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:59.023 11:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:59.023 11:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:59.023 11:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:59.023 [2024-07-22 11:01:06.833347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 01:05:59.023 [2024-07-22 11:01:06.833498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13de0a0 with addr=10.0.0.2, port=4420 01:05:59.023 [2024-07-22 11:01:06.833545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13de0a0 is same with the state(5) to be set 01:05:59.023 [2024-07-22 11:01:06.833627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13de0a0 (9): Bad file descriptor 01:05:59.023 [2024-07-22 11:01:06.834702] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 01:05:59.023 [2024-07-22 11:01:06.834767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:05:59.023 [2024-07-22 11:01:06.834796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:05:59.023 [2024-07-22 11:01:06.834827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:05:59.023 [2024-07-22 11:01:06.834899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
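The repeated bdev_get_bdevs | jq -r '.[].name' | sort | xargs calls interleaved with sleep 1 above are the script's get_bdev_list/wait_for_bdev helpers: after the target-side address was deleted and nvmf_tgt_if taken down, the host is polled once per second until nvme0n1 disappears, which is exactly what the connect timeouts and reset failures above are driving toward. A consolidated sketch of that polling, assuming the same /tmp/host.sock RPC socket (helper names mirror the script, but the bodies are paraphrased):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
get_bdev_list() {
    # Same pipeline as the trace: list bdev names, sorted, on one line.
    $RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    # Loop until the bdev list matches the expected value ('' once nvme0n1 is gone).
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}
wait_for_bdev ''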
01:05:59.023 [2024-07-22 11:01:06.834931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:05:59.023 11:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:59.023 11:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:59.023 11:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:59.955 [2024-07-22 11:01:07.833383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:05:59.955 [2024-07-22 11:01:07.833428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:05:59.955 [2024-07-22 11:01:07.833438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:05:59.955 [2024-07-22 11:01:07.833448] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 01:05:59.955 [2024-07-22 11:01:07.833467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:05:59.955 [2024-07-22 11:01:07.833490] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 01:05:59.955 [2024-07-22 11:01:07.833534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:05:59.955 [2024-07-22 11:01:07.833546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:59.955 [2024-07-22 11:01:07.833558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:05:59.955 [2024-07-22 11:01:07.833567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:59.955 [2024-07-22 11:01:07.833577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:05:59.955 [2024-07-22 11:01:07.833585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:59.955 [2024-07-22 11:01:07.833594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:05:59.955 [2024-07-22 11:01:07.833603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:59.955 [2024-07-22 11:01:07.833613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:05:59.955 [2024-07-22 11:01:07.833621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:59.955 [2024-07-22 11:01:07.833630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
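The error burst above is the intended behaviour rather than a test failure: the discovery connection was opened earlier (at @69) with deliberately short timeouts, so once 10.0.0.2 became unreachable the reconnect attempts give up within a couple of seconds and both the data controller and the discovery controller are torn down, emptying the bdev list. A hedged standalone restatement of that RPC with the relevant flags spelled out (the values are the ones shown in the trace):

# --reconnect-delay-sec 1: retry the lost connection once per second.
# --ctrlr-loss-timeout-sec 2: give up and delete the controller after ~2 s offline.
# --fast-io-fail-timeout-sec 1: fail queued I/O after 1 s instead of holding it.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach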
01:05:59.955 [2024-07-22 11:01:07.834291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dd550 (9): Bad file descriptor 01:05:59.955 [2024-07-22 11:01:07.835299] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 01:05:59.955 [2024-07-22 11:01:07.835317] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 01:05:59.955 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:59.955 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:59.955 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:59.955 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:59.955 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:59.955 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:59.955 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:06:00.213 11:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:00.213 11:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:06:00.213 11:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:06:01.145 11:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:06:01.145 11:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:06:01.145 11:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:06:01.145 11:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:01.145 11:01:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:06:01.145 11:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:06:01.145 11:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:06:01.145 11:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:01.145 11:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:06:01.145 11:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:06:02.077 [2024-07-22 11:01:09.841766] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:06:02.077 [2024-07-22 11:01:09.841801] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:06:02.077 [2024-07-22 11:01:09.841815] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:06:02.077 [2024-07-22 11:01:09.927779] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 01:06:02.077 [2024-07-22 11:01:09.983587] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:06:02.077 [2024-07-22 11:01:09.983645] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:06:02.078 [2024-07-22 11:01:09.983663] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:06:02.078 [2024-07-22 11:01:09.983679] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 01:06:02.078 [2024-07-22 11:01:09.983689] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:06:02.078 [2024-07-22 11:01:09.990369] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x13ce230 was disconnected and freed. delete nvme_qpair. 
01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 109676 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 109676 ']' 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 109676 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109676 01:06:02.334 killing process with pid 109676 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109676' 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 109676 01:06:02.334 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 109676 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:06:02.591 rmmod nvme_tcp 01:06:02.591 rmmod nvme_fabrics 01:06:02.591 rmmod nvme_keyring 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 01:06:02.591 
11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 109627 ']' 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 109627 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 109627 ']' 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 109627 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109627 01:06:02.591 killing process with pid 109627 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109627' 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 109627 01:06:02.591 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 109627 01:06:02.849 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:06:02.849 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:06:02.849 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:06:02.849 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:06:02.849 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 01:06:02.849 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:02.849 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:02.849 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:02.849 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:06:03.107 ************************************ 01:06:03.107 END TEST nvmf_discovery_remove_ifc 01:06:03.107 ************************************ 01:06:03.107 01:06:03.107 real 0m14.244s 01:06:03.107 user 0m24.623s 01:06:03.107 sys 0m2.338s 01:06:03.107 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:03.107 11:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:06:03.107 11:01:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:06:03.107 11:01:10 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:06:03.107 11:01:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:06:03.107 11:01:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:03.107 11:01:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:06:03.107 ************************************ 01:06:03.107 START TEST nvmf_identify_kernel_target 01:06:03.107 ************************************ 01:06:03.107 11:01:10 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:06:03.107 * Looking for test storage... 01:06:03.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:03.107 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:03.108 11:01:11 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:06:03.108 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 01:06:03.366 11:01:11 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:06:03.366 Cannot find device "nvmf_tgt_br" 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:06:03.366 Cannot find device "nvmf_tgt_br2" 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:06:03.366 Cannot find device "nvmf_tgt_br" 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:06:03.366 
Cannot find device "nvmf_tgt_br2" 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:03.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:03.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:06:03.366 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:03.624 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:06:03.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:06:03.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 01:06:03.624 01:06:03.625 --- 10.0.0.2 ping statistics --- 01:06:03.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:03.625 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:06:03.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:03.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 01:06:03.625 01:06:03.625 --- 10.0.0.3 ping statistics --- 01:06:03.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:03.625 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:03.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:03.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:06:03.625 01:06:03.625 --- 10.0.0.1 ping statistics --- 01:06:03.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:03.625 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 01:06:03.625 11:01:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:06:04.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:06:04.188 Waiting for block devices as requested 01:06:04.188 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:06:04.447 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:06:04.447 No valid GPT data, bailing 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 01:06:04.447 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:06:04.447 No valid GPT data, bailing 01:06:04.706 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:06:04.706 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 01:06:04.706 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:06:04.706 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 01:06:04.706 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:06:04.706 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:06:04.707 No valid GPT data, bailing 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:06:04.707 No valid GPT data, bailing 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
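The scan above walks /sys/block/nvme*, skips zoned namespaces, and treats a device as free when neither spdk-gpt.py nor blkid finds a partition table on it ("No valid GPT data, bailing"); the last candidate, /dev/nvme1n1, is kept as the backing device. The mkdir just traced is the first configfs step of configure_kernel_target, and the remaining writes follow in the trace below. Condensed into a standalone sketch, the sequence amounts to roughly the following; the redirection targets of the echo commands are hidden by xtrace, so the nvmet attribute names shown are assumptions based on the standard nvmet configfs layout:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  # pick the last non-zoned NVMe namespace without a partition table (as the trace does)
  nvme=
  for block in /sys/block/nvme*; do
      dev=/dev/${block##*/}
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      [[ -z $(blkid -s PTTYPE -o value "$dev") ]] && nvme=$dev
  done

  # export it through the kernel nvmet target over TCP on 10.0.0.1:4420
  mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
  echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"     # assumed attribute
  echo 1        > "$subsys/attr_allow_any_host"                      # assumed attribute
  echo "$nvme"  > "$subsys/namespaces/1/device_path"
  echo 1        > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

After this, the nvme discover call in the trace sees two discovery log entries on 10.0.0.1:4420: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.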
01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -a 10.0.0.1 -t tcp -s 4420 01:06:04.707 01:06:04.707 Discovery Log Number of Records 2, Generation counter 2 01:06:04.707 =====Discovery Log Entry 0====== 01:06:04.707 trtype: tcp 01:06:04.707 adrfam: ipv4 01:06:04.707 subtype: current discovery subsystem 01:06:04.707 treq: not specified, sq flow control disable supported 01:06:04.707 portid: 1 01:06:04.707 trsvcid: 4420 01:06:04.707 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:06:04.707 traddr: 10.0.0.1 01:06:04.707 eflags: none 01:06:04.707 sectype: none 01:06:04.707 =====Discovery Log Entry 1====== 01:06:04.707 trtype: tcp 01:06:04.707 adrfam: ipv4 01:06:04.707 subtype: nvme subsystem 01:06:04.707 treq: not specified, sq flow control disable supported 01:06:04.707 portid: 1 01:06:04.707 trsvcid: 4420 01:06:04.707 subnqn: nqn.2016-06.io.spdk:testnqn 01:06:04.707 traddr: 10.0.0.1 01:06:04.707 eflags: none 01:06:04.707 sectype: none 01:06:04.707 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 01:06:04.707 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 01:06:04.966 ===================================================== 01:06:04.966 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 01:06:04.966 ===================================================== 01:06:04.966 Controller Capabilities/Features 01:06:04.966 ================================ 01:06:04.966 Vendor ID: 0000 01:06:04.966 Subsystem Vendor ID: 0000 01:06:04.966 Serial Number: cf3320d354106864b87d 01:06:04.966 Model Number: Linux 01:06:04.966 Firmware Version: 6.7.0-68 01:06:04.966 Recommended Arb Burst: 0 01:06:04.966 IEEE OUI Identifier: 00 00 00 01:06:04.966 Multi-path I/O 01:06:04.966 May have multiple subsystem ports: No 01:06:04.966 May have multiple controllers: No 01:06:04.966 Associated with SR-IOV VF: No 01:06:04.966 Max Data Transfer Size: Unlimited 01:06:04.966 Max Number of Namespaces: 0 
01:06:04.966 Max Number of I/O Queues: 1024 01:06:04.966 NVMe Specification Version (VS): 1.3 01:06:04.966 NVMe Specification Version (Identify): 1.3 01:06:04.966 Maximum Queue Entries: 1024 01:06:04.966 Contiguous Queues Required: No 01:06:04.966 Arbitration Mechanisms Supported 01:06:04.966 Weighted Round Robin: Not Supported 01:06:04.966 Vendor Specific: Not Supported 01:06:04.966 Reset Timeout: 7500 ms 01:06:04.966 Doorbell Stride: 4 bytes 01:06:04.966 NVM Subsystem Reset: Not Supported 01:06:04.966 Command Sets Supported 01:06:04.966 NVM Command Set: Supported 01:06:04.966 Boot Partition: Not Supported 01:06:04.966 Memory Page Size Minimum: 4096 bytes 01:06:04.966 Memory Page Size Maximum: 4096 bytes 01:06:04.966 Persistent Memory Region: Not Supported 01:06:04.966 Optional Asynchronous Events Supported 01:06:04.966 Namespace Attribute Notices: Not Supported 01:06:04.966 Firmware Activation Notices: Not Supported 01:06:04.966 ANA Change Notices: Not Supported 01:06:04.966 PLE Aggregate Log Change Notices: Not Supported 01:06:04.966 LBA Status Info Alert Notices: Not Supported 01:06:04.966 EGE Aggregate Log Change Notices: Not Supported 01:06:04.966 Normal NVM Subsystem Shutdown event: Not Supported 01:06:04.966 Zone Descriptor Change Notices: Not Supported 01:06:04.966 Discovery Log Change Notices: Supported 01:06:04.966 Controller Attributes 01:06:04.966 128-bit Host Identifier: Not Supported 01:06:04.966 Non-Operational Permissive Mode: Not Supported 01:06:04.966 NVM Sets: Not Supported 01:06:04.966 Read Recovery Levels: Not Supported 01:06:04.966 Endurance Groups: Not Supported 01:06:04.966 Predictable Latency Mode: Not Supported 01:06:04.966 Traffic Based Keep ALive: Not Supported 01:06:04.966 Namespace Granularity: Not Supported 01:06:04.966 SQ Associations: Not Supported 01:06:04.966 UUID List: Not Supported 01:06:04.966 Multi-Domain Subsystem: Not Supported 01:06:04.966 Fixed Capacity Management: Not Supported 01:06:04.966 Variable Capacity Management: Not Supported 01:06:04.966 Delete Endurance Group: Not Supported 01:06:04.966 Delete NVM Set: Not Supported 01:06:04.966 Extended LBA Formats Supported: Not Supported 01:06:04.966 Flexible Data Placement Supported: Not Supported 01:06:04.966 01:06:04.966 Controller Memory Buffer Support 01:06:04.966 ================================ 01:06:04.966 Supported: No 01:06:04.966 01:06:04.966 Persistent Memory Region Support 01:06:04.966 ================================ 01:06:04.966 Supported: No 01:06:04.967 01:06:04.967 Admin Command Set Attributes 01:06:04.967 ============================ 01:06:04.967 Security Send/Receive: Not Supported 01:06:04.967 Format NVM: Not Supported 01:06:04.967 Firmware Activate/Download: Not Supported 01:06:04.967 Namespace Management: Not Supported 01:06:04.967 Device Self-Test: Not Supported 01:06:04.967 Directives: Not Supported 01:06:04.967 NVMe-MI: Not Supported 01:06:04.967 Virtualization Management: Not Supported 01:06:04.967 Doorbell Buffer Config: Not Supported 01:06:04.967 Get LBA Status Capability: Not Supported 01:06:04.967 Command & Feature Lockdown Capability: Not Supported 01:06:04.967 Abort Command Limit: 1 01:06:04.967 Async Event Request Limit: 1 01:06:04.967 Number of Firmware Slots: N/A 01:06:04.967 Firmware Slot 1 Read-Only: N/A 01:06:04.967 Firmware Activation Without Reset: N/A 01:06:04.967 Multiple Update Detection Support: N/A 01:06:04.967 Firmware Update Granularity: No Information Provided 01:06:04.967 Per-Namespace SMART Log: No 01:06:04.967 Asymmetric Namespace Access Log Page: 
Not Supported 01:06:04.967 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:06:04.967 Command Effects Log Page: Not Supported 01:06:04.967 Get Log Page Extended Data: Supported 01:06:04.967 Telemetry Log Pages: Not Supported 01:06:04.967 Persistent Event Log Pages: Not Supported 01:06:04.967 Supported Log Pages Log Page: May Support 01:06:04.967 Commands Supported & Effects Log Page: Not Supported 01:06:04.967 Feature Identifiers & Effects Log Page:May Support 01:06:04.967 NVMe-MI Commands & Effects Log Page: May Support 01:06:04.967 Data Area 4 for Telemetry Log: Not Supported 01:06:04.967 Error Log Page Entries Supported: 1 01:06:04.967 Keep Alive: Not Supported 01:06:04.967 01:06:04.967 NVM Command Set Attributes 01:06:04.967 ========================== 01:06:04.967 Submission Queue Entry Size 01:06:04.967 Max: 1 01:06:04.967 Min: 1 01:06:04.967 Completion Queue Entry Size 01:06:04.967 Max: 1 01:06:04.967 Min: 1 01:06:04.967 Number of Namespaces: 0 01:06:04.967 Compare Command: Not Supported 01:06:04.967 Write Uncorrectable Command: Not Supported 01:06:04.967 Dataset Management Command: Not Supported 01:06:04.967 Write Zeroes Command: Not Supported 01:06:04.967 Set Features Save Field: Not Supported 01:06:04.967 Reservations: Not Supported 01:06:04.967 Timestamp: Not Supported 01:06:04.967 Copy: Not Supported 01:06:04.967 Volatile Write Cache: Not Present 01:06:04.967 Atomic Write Unit (Normal): 1 01:06:04.967 Atomic Write Unit (PFail): 1 01:06:04.967 Atomic Compare & Write Unit: 1 01:06:04.967 Fused Compare & Write: Not Supported 01:06:04.967 Scatter-Gather List 01:06:04.967 SGL Command Set: Supported 01:06:04.967 SGL Keyed: Not Supported 01:06:04.967 SGL Bit Bucket Descriptor: Not Supported 01:06:04.967 SGL Metadata Pointer: Not Supported 01:06:04.967 Oversized SGL: Not Supported 01:06:04.967 SGL Metadata Address: Not Supported 01:06:04.967 SGL Offset: Supported 01:06:04.967 Transport SGL Data Block: Not Supported 01:06:04.967 Replay Protected Memory Block: Not Supported 01:06:04.967 01:06:04.967 Firmware Slot Information 01:06:04.967 ========================= 01:06:04.967 Active slot: 0 01:06:04.967 01:06:04.967 01:06:04.967 Error Log 01:06:04.967 ========= 01:06:04.967 01:06:04.967 Active Namespaces 01:06:04.967 ================= 01:06:04.967 Discovery Log Page 01:06:04.967 ================== 01:06:04.967 Generation Counter: 2 01:06:04.967 Number of Records: 2 01:06:04.967 Record Format: 0 01:06:04.967 01:06:04.967 Discovery Log Entry 0 01:06:04.967 ---------------------- 01:06:04.967 Transport Type: 3 (TCP) 01:06:04.967 Address Family: 1 (IPv4) 01:06:04.967 Subsystem Type: 3 (Current Discovery Subsystem) 01:06:04.967 Entry Flags: 01:06:04.967 Duplicate Returned Information: 0 01:06:04.967 Explicit Persistent Connection Support for Discovery: 0 01:06:04.967 Transport Requirements: 01:06:04.967 Secure Channel: Not Specified 01:06:04.967 Port ID: 1 (0x0001) 01:06:04.967 Controller ID: 65535 (0xffff) 01:06:04.967 Admin Max SQ Size: 32 01:06:04.967 Transport Service Identifier: 4420 01:06:04.967 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:06:04.967 Transport Address: 10.0.0.1 01:06:04.967 Discovery Log Entry 1 01:06:04.967 ---------------------- 01:06:04.967 Transport Type: 3 (TCP) 01:06:04.967 Address Family: 1 (IPv4) 01:06:04.967 Subsystem Type: 2 (NVM Subsystem) 01:06:04.967 Entry Flags: 01:06:04.967 Duplicate Returned Information: 0 01:06:04.967 Explicit Persistent Connection Support for Discovery: 0 01:06:04.967 Transport Requirements: 01:06:04.967 
Secure Channel: Not Specified 01:06:04.967 Port ID: 1 (0x0001) 01:06:04.967 Controller ID: 65535 (0xffff) 01:06:04.967 Admin Max SQ Size: 32 01:06:04.967 Transport Service Identifier: 4420 01:06:04.967 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 01:06:04.967 Transport Address: 10.0.0.1 01:06:04.967 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:06:05.232 get_feature(0x01) failed 01:06:05.232 get_feature(0x02) failed 01:06:05.232 get_feature(0x04) failed 01:06:05.232 ===================================================== 01:06:05.232 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:06:05.232 ===================================================== 01:06:05.232 Controller Capabilities/Features 01:06:05.232 ================================ 01:06:05.232 Vendor ID: 0000 01:06:05.232 Subsystem Vendor ID: 0000 01:06:05.232 Serial Number: ea47cc5ec15513dd32d0 01:06:05.232 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 01:06:05.232 Firmware Version: 6.7.0-68 01:06:05.232 Recommended Arb Burst: 6 01:06:05.232 IEEE OUI Identifier: 00 00 00 01:06:05.232 Multi-path I/O 01:06:05.232 May have multiple subsystem ports: Yes 01:06:05.232 May have multiple controllers: Yes 01:06:05.232 Associated with SR-IOV VF: No 01:06:05.232 Max Data Transfer Size: Unlimited 01:06:05.232 Max Number of Namespaces: 1024 01:06:05.232 Max Number of I/O Queues: 128 01:06:05.232 NVMe Specification Version (VS): 1.3 01:06:05.232 NVMe Specification Version (Identify): 1.3 01:06:05.232 Maximum Queue Entries: 1024 01:06:05.232 Contiguous Queues Required: No 01:06:05.232 Arbitration Mechanisms Supported 01:06:05.232 Weighted Round Robin: Not Supported 01:06:05.232 Vendor Specific: Not Supported 01:06:05.232 Reset Timeout: 7500 ms 01:06:05.232 Doorbell Stride: 4 bytes 01:06:05.232 NVM Subsystem Reset: Not Supported 01:06:05.232 Command Sets Supported 01:06:05.232 NVM Command Set: Supported 01:06:05.232 Boot Partition: Not Supported 01:06:05.232 Memory Page Size Minimum: 4096 bytes 01:06:05.232 Memory Page Size Maximum: 4096 bytes 01:06:05.232 Persistent Memory Region: Not Supported 01:06:05.232 Optional Asynchronous Events Supported 01:06:05.232 Namespace Attribute Notices: Supported 01:06:05.232 Firmware Activation Notices: Not Supported 01:06:05.232 ANA Change Notices: Supported 01:06:05.232 PLE Aggregate Log Change Notices: Not Supported 01:06:05.232 LBA Status Info Alert Notices: Not Supported 01:06:05.232 EGE Aggregate Log Change Notices: Not Supported 01:06:05.232 Normal NVM Subsystem Shutdown event: Not Supported 01:06:05.232 Zone Descriptor Change Notices: Not Supported 01:06:05.232 Discovery Log Change Notices: Not Supported 01:06:05.232 Controller Attributes 01:06:05.232 128-bit Host Identifier: Supported 01:06:05.232 Non-Operational Permissive Mode: Not Supported 01:06:05.232 NVM Sets: Not Supported 01:06:05.232 Read Recovery Levels: Not Supported 01:06:05.232 Endurance Groups: Not Supported 01:06:05.232 Predictable Latency Mode: Not Supported 01:06:05.232 Traffic Based Keep ALive: Supported 01:06:05.232 Namespace Granularity: Not Supported 01:06:05.232 SQ Associations: Not Supported 01:06:05.232 UUID List: Not Supported 01:06:05.232 Multi-Domain Subsystem: Not Supported 01:06:05.232 Fixed Capacity Management: Not Supported 01:06:05.232 Variable Capacity Management: Not Supported 01:06:05.232 
Delete Endurance Group: Not Supported 01:06:05.232 Delete NVM Set: Not Supported 01:06:05.232 Extended LBA Formats Supported: Not Supported 01:06:05.232 Flexible Data Placement Supported: Not Supported 01:06:05.232 01:06:05.232 Controller Memory Buffer Support 01:06:05.232 ================================ 01:06:05.232 Supported: No 01:06:05.232 01:06:05.232 Persistent Memory Region Support 01:06:05.232 ================================ 01:06:05.232 Supported: No 01:06:05.232 01:06:05.232 Admin Command Set Attributes 01:06:05.232 ============================ 01:06:05.232 Security Send/Receive: Not Supported 01:06:05.232 Format NVM: Not Supported 01:06:05.232 Firmware Activate/Download: Not Supported 01:06:05.232 Namespace Management: Not Supported 01:06:05.232 Device Self-Test: Not Supported 01:06:05.232 Directives: Not Supported 01:06:05.232 NVMe-MI: Not Supported 01:06:05.232 Virtualization Management: Not Supported 01:06:05.232 Doorbell Buffer Config: Not Supported 01:06:05.232 Get LBA Status Capability: Not Supported 01:06:05.232 Command & Feature Lockdown Capability: Not Supported 01:06:05.232 Abort Command Limit: 4 01:06:05.232 Async Event Request Limit: 4 01:06:05.232 Number of Firmware Slots: N/A 01:06:05.233 Firmware Slot 1 Read-Only: N/A 01:06:05.233 Firmware Activation Without Reset: N/A 01:06:05.233 Multiple Update Detection Support: N/A 01:06:05.233 Firmware Update Granularity: No Information Provided 01:06:05.233 Per-Namespace SMART Log: Yes 01:06:05.233 Asymmetric Namespace Access Log Page: Supported 01:06:05.233 ANA Transition Time : 10 sec 01:06:05.233 01:06:05.233 Asymmetric Namespace Access Capabilities 01:06:05.233 ANA Optimized State : Supported 01:06:05.233 ANA Non-Optimized State : Supported 01:06:05.233 ANA Inaccessible State : Supported 01:06:05.233 ANA Persistent Loss State : Supported 01:06:05.233 ANA Change State : Supported 01:06:05.233 ANAGRPID is not changed : No 01:06:05.233 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 01:06:05.233 01:06:05.233 ANA Group Identifier Maximum : 128 01:06:05.233 Number of ANA Group Identifiers : 128 01:06:05.233 Max Number of Allowed Namespaces : 1024 01:06:05.233 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 01:06:05.233 Command Effects Log Page: Supported 01:06:05.233 Get Log Page Extended Data: Supported 01:06:05.233 Telemetry Log Pages: Not Supported 01:06:05.233 Persistent Event Log Pages: Not Supported 01:06:05.233 Supported Log Pages Log Page: May Support 01:06:05.233 Commands Supported & Effects Log Page: Not Supported 01:06:05.233 Feature Identifiers & Effects Log Page:May Support 01:06:05.233 NVMe-MI Commands & Effects Log Page: May Support 01:06:05.233 Data Area 4 for Telemetry Log: Not Supported 01:06:05.233 Error Log Page Entries Supported: 128 01:06:05.233 Keep Alive: Supported 01:06:05.233 Keep Alive Granularity: 1000 ms 01:06:05.233 01:06:05.233 NVM Command Set Attributes 01:06:05.233 ========================== 01:06:05.233 Submission Queue Entry Size 01:06:05.233 Max: 64 01:06:05.233 Min: 64 01:06:05.233 Completion Queue Entry Size 01:06:05.233 Max: 16 01:06:05.233 Min: 16 01:06:05.233 Number of Namespaces: 1024 01:06:05.233 Compare Command: Not Supported 01:06:05.233 Write Uncorrectable Command: Not Supported 01:06:05.233 Dataset Management Command: Supported 01:06:05.233 Write Zeroes Command: Supported 01:06:05.233 Set Features Save Field: Not Supported 01:06:05.233 Reservations: Not Supported 01:06:05.233 Timestamp: Not Supported 01:06:05.233 Copy: Not Supported 01:06:05.233 Volatile Write Cache: Present 
01:06:05.233 Atomic Write Unit (Normal): 1 01:06:05.233 Atomic Write Unit (PFail): 1 01:06:05.233 Atomic Compare & Write Unit: 1 01:06:05.233 Fused Compare & Write: Not Supported 01:06:05.233 Scatter-Gather List 01:06:05.233 SGL Command Set: Supported 01:06:05.233 SGL Keyed: Not Supported 01:06:05.233 SGL Bit Bucket Descriptor: Not Supported 01:06:05.233 SGL Metadata Pointer: Not Supported 01:06:05.233 Oversized SGL: Not Supported 01:06:05.233 SGL Metadata Address: Not Supported 01:06:05.233 SGL Offset: Supported 01:06:05.233 Transport SGL Data Block: Not Supported 01:06:05.233 Replay Protected Memory Block: Not Supported 01:06:05.233 01:06:05.233 Firmware Slot Information 01:06:05.233 ========================= 01:06:05.233 Active slot: 0 01:06:05.233 01:06:05.233 Asymmetric Namespace Access 01:06:05.233 =========================== 01:06:05.233 Change Count : 0 01:06:05.233 Number of ANA Group Descriptors : 1 01:06:05.233 ANA Group Descriptor : 0 01:06:05.233 ANA Group ID : 1 01:06:05.233 Number of NSID Values : 1 01:06:05.233 Change Count : 0 01:06:05.233 ANA State : 1 01:06:05.233 Namespace Identifier : 1 01:06:05.233 01:06:05.233 Commands Supported and Effects 01:06:05.233 ============================== 01:06:05.233 Admin Commands 01:06:05.233 -------------- 01:06:05.233 Get Log Page (02h): Supported 01:06:05.233 Identify (06h): Supported 01:06:05.233 Abort (08h): Supported 01:06:05.233 Set Features (09h): Supported 01:06:05.233 Get Features (0Ah): Supported 01:06:05.233 Asynchronous Event Request (0Ch): Supported 01:06:05.233 Keep Alive (18h): Supported 01:06:05.233 I/O Commands 01:06:05.233 ------------ 01:06:05.233 Flush (00h): Supported 01:06:05.233 Write (01h): Supported LBA-Change 01:06:05.233 Read (02h): Supported 01:06:05.233 Write Zeroes (08h): Supported LBA-Change 01:06:05.233 Dataset Management (09h): Supported 01:06:05.233 01:06:05.233 Error Log 01:06:05.233 ========= 01:06:05.233 Entry: 0 01:06:05.233 Error Count: 0x3 01:06:05.233 Submission Queue Id: 0x0 01:06:05.233 Command Id: 0x5 01:06:05.233 Phase Bit: 0 01:06:05.233 Status Code: 0x2 01:06:05.233 Status Code Type: 0x0 01:06:05.233 Do Not Retry: 1 01:06:05.233 Error Location: 0x28 01:06:05.233 LBA: 0x0 01:06:05.233 Namespace: 0x0 01:06:05.233 Vendor Log Page: 0x0 01:06:05.233 ----------- 01:06:05.233 Entry: 1 01:06:05.233 Error Count: 0x2 01:06:05.233 Submission Queue Id: 0x0 01:06:05.233 Command Id: 0x5 01:06:05.233 Phase Bit: 0 01:06:05.233 Status Code: 0x2 01:06:05.234 Status Code Type: 0x0 01:06:05.234 Do Not Retry: 1 01:06:05.234 Error Location: 0x28 01:06:05.234 LBA: 0x0 01:06:05.234 Namespace: 0x0 01:06:05.234 Vendor Log Page: 0x0 01:06:05.234 ----------- 01:06:05.234 Entry: 2 01:06:05.234 Error Count: 0x1 01:06:05.234 Submission Queue Id: 0x0 01:06:05.234 Command Id: 0x4 01:06:05.234 Phase Bit: 0 01:06:05.234 Status Code: 0x2 01:06:05.234 Status Code Type: 0x0 01:06:05.234 Do Not Retry: 1 01:06:05.234 Error Location: 0x28 01:06:05.234 LBA: 0x0 01:06:05.234 Namespace: 0x0 01:06:05.234 Vendor Log Page: 0x0 01:06:05.234 01:06:05.234 Number of Queues 01:06:05.234 ================ 01:06:05.234 Number of I/O Submission Queues: 128 01:06:05.234 Number of I/O Completion Queues: 128 01:06:05.234 01:06:05.234 ZNS Specific Controller Data 01:06:05.234 ============================ 01:06:05.234 Zone Append Size Limit: 0 01:06:05.234 01:06:05.234 01:06:05.234 Active Namespaces 01:06:05.234 ================= 01:06:05.234 get_feature(0x05) failed 01:06:05.234 Namespace ID:1 01:06:05.234 Command Set Identifier: NVM (00h) 
01:06:05.234 Deallocate: Supported 01:06:05.234 Deallocated/Unwritten Error: Not Supported 01:06:05.234 Deallocated Read Value: Unknown 01:06:05.234 Deallocate in Write Zeroes: Not Supported 01:06:05.234 Deallocated Guard Field: 0xFFFF 01:06:05.234 Flush: Supported 01:06:05.234 Reservation: Not Supported 01:06:05.234 Namespace Sharing Capabilities: Multiple Controllers 01:06:05.234 Size (in LBAs): 1310720 (5GiB) 01:06:05.234 Capacity (in LBAs): 1310720 (5GiB) 01:06:05.234 Utilization (in LBAs): 1310720 (5GiB) 01:06:05.234 UUID: 9541d22a-9afb-47dc-b39d-f107be391cdb 01:06:05.234 Thin Provisioning: Not Supported 01:06:05.234 Per-NS Atomic Units: Yes 01:06:05.234 Atomic Boundary Size (Normal): 0 01:06:05.234 Atomic Boundary Size (PFail): 0 01:06:05.234 Atomic Boundary Offset: 0 01:06:05.234 NGUID/EUI64 Never Reused: No 01:06:05.234 ANA group ID: 1 01:06:05.234 Namespace Write Protected: No 01:06:05.234 Number of LBA Formats: 1 01:06:05.234 Current LBA Format: LBA Format #00 01:06:05.234 LBA Format #00: Data Size: 4096 Metadata Size: 0 01:06:05.234 01:06:05.234 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 01:06:05.234 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 01:06:05.234 11:01:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:06:05.234 rmmod nvme_tcp 01:06:05.234 rmmod nvme_fabrics 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:06:05.234 
11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 01:06:05.234 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:06:05.499 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:06:05.499 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:06:05.499 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:06:05.499 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 01:06:05.499 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 01:06:05.499 11:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:06:06.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:06:06.431 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:06:06.431 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:06:06.431 ************************************ 01:06:06.431 END TEST nvmf_identify_kernel_target 01:06:06.431 ************************************ 01:06:06.431 01:06:06.431 real 0m3.402s 01:06:06.431 user 0m1.102s 01:06:06.431 sys 0m1.838s 01:06:06.431 11:01:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:06.431 11:01:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 01:06:06.431 11:01:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:06:06.431 11:01:14 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:06:06.431 11:01:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:06:06.431 11:01:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:06.431 11:01:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:06:06.431 ************************************ 01:06:06.431 START TEST nvmf_auth_host 01:06:06.431 ************************************ 01:06:06.431 11:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:06:06.689 * Looking for test storage... 
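Before the next test begins, clean_kernel_target (traced above) unwinds that configuration in reverse: disable the namespace, unlink the subsystem from the port, remove the configfs directories children-first, then unload the nvmet modules so setup.sh can rebind the PCI devices. As a sketch, with the target of the lone 'echo 0' assumed to be the namespace enable file (xtrace hides the redirection):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  echo 0 > "$subsys/namespaces/1/enable"                          # assumed redirection target
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"   # drop the port -> subsystem link
  rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"         # children before parents
  modprobe -r nvmet_tcp nvmet                                     # only once nothing else holds nvmet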
01:06:06.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:06:06.689 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:06:06.690 Cannot find device "nvmf_tgt_br" 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:06:06.690 Cannot find device "nvmf_tgt_br2" 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:06:06.690 Cannot find device "nvmf_tgt_br" 
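The "Cannot find device" messages here (and the namespace errors just below) are the expected result of nvmftestinit's best-effort cleanup on a host where the previous test already removed everything; nvmf_veth_init then rebuilds the same topology the earlier test used, which the trace below walks through command by command. Summarized, with the 'ip link set ... up' calls and the bridge FORWARD accept rule elided for brevity:

  ip netns add nvmf_tgt_ns_spdk

  # three veth pairs: the initiator side stays in the root namespace,
  # both target sides move into the namespace that will run nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address

  # bridge the root-namespace ends together and open TCP/4420 toward the initiator
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings that follow confirm 10.0.0.2 and 10.0.0.3 are reachable from the root namespace and 10.0.0.1 from inside nvmf_tgt_ns_spdk before the target application is started.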
01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:06:06.690 Cannot find device "nvmf_tgt_br2" 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 01:06:06.690 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:06.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:06.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:06.947 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:06:06.948 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:06:06.948 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:06:06.948 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:06.948 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 01:06:06.948 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:06.948 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:06.948 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:06:07.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:06:07.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 01:06:07.206 01:06:07.206 --- 10.0.0.2 ping statistics --- 01:06:07.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:07.206 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:06:07.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:07.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 01:06:07.206 01:06:07.206 --- 10.0.0.3 ping statistics --- 01:06:07.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:07.206 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:07.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:07.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 01:06:07.206 01:06:07.206 --- 10.0.0.1 ping statistics --- 01:06:07.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:07.206 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=110578 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 110578 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 110578 ']' 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:07.206 11:01:14 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:07.206 11:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3023a57517d39026c9aa63ea34efad0a 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ijL 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3023a57517d39026c9aa63ea34efad0a 0 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3023a57517d39026c9aa63ea34efad0a 0 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3023a57517d39026c9aa63ea34efad0a 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ijL 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ijL 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ijL 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=106d0d703e1f16924c7c5e659689d329482ce47304379d9a35761934c1a77822 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1PS 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 106d0d703e1f16924c7c5e659689d329482ce47304379d9a35761934c1a77822 3 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 106d0d703e1f16924c7c5e659689d329482ce47304379d9a35761934c1a77822 3 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=106d0d703e1f16924c7c5e659689d329482ce47304379d9a35761934c1a77822 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 01:06:08.142 11:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:06:08.142 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1PS 01:06:08.142 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1PS 01:06:08.142 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.1PS 01:06:08.142 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=416dc17ccb5eb642142b7738715564cc16c583ae48a91f40 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.45z 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 416dc17ccb5eb642142b7738715564cc16c583ae48a91f40 0 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 416dc17ccb5eb642142b7738715564cc16c583ae48a91f40 0 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=416dc17ccb5eb642142b7738715564cc16c583ae48a91f40 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 01:06:08.143 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.45z 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.45z 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.45z 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3da018c995b45f450f85301ee0c8024766cde33fd7fcd065 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DWo 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3da018c995b45f450f85301ee0c8024766cde33fd7fcd065 2 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3da018c995b45f450f85301ee0c8024766cde33fd7fcd065 2 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3da018c995b45f450f85301ee0c8024766cde33fd7fcd065 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DWo 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DWo 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.DWo 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ce7145dcd4e4da3d16481ba8c8e662da 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7wf 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ce7145dcd4e4da3d16481ba8c8e662da 
1 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ce7145dcd4e4da3d16481ba8c8e662da 1 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ce7145dcd4e4da3d16481ba8c8e662da 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7wf 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7wf 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.7wf 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9bb295f78ac3ff739780bac5a77e83e9 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NN4 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9bb295f78ac3ff739780bac5a77e83e9 1 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9bb295f78ac3ff739780bac5a77e83e9 1 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9bb295f78ac3ff739780bac5a77e83e9 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NN4 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NN4 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.NN4 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 01:06:08.402 11:01:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=757ecb62b771e7ad72696ad7aa93946676de7b8b7ec32d67 01:06:08.402 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Put 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 757ecb62b771e7ad72696ad7aa93946676de7b8b7ec32d67 2 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 757ecb62b771e7ad72696ad7aa93946676de7b8b7ec32d67 2 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=757ecb62b771e7ad72696ad7aa93946676de7b8b7ec32d67 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Put 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Put 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Put 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5f8799aa784bb6e409628cfbb2835e48 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.17j 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5f8799aa784bb6e409628cfbb2835e48 0 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5f8799aa784bb6e409628cfbb2835e48 0 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5f8799aa784bb6e409628cfbb2835e48 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.17j 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.17j 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.17j 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dde527e490037cfbfc85c68ea78193a5fc0c6d4cc8c4538fcf4254d84f956dc3 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eN1 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dde527e490037cfbfc85c68ea78193a5fc0c6d4cc8c4538fcf4254d84f956dc3 3 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dde527e490037cfbfc85c68ea78193a5fc0c6d4cc8c4538fcf4254d84f956dc3 3 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dde527e490037cfbfc85c68ea78193a5fc0c6d4cc8c4538fcf4254d84f956dc3 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eN1 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eN1 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.eN1 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 110578 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 110578 ']' 01:06:08.663 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:08.664 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:08.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:08.664 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
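Every gen_dhchap_key call above follows the same pattern: read random bytes from /dev/urandom as a hex string with xxd, write the formatted secret to a mktemp file, and chmod it to 0600. The DHHC-1 formatting itself is done by an inline python helper whose body is not visible in the log; the sketch below is a stand-alone guess at what it computes, assuming the secret is the ASCII hex string itself with a little-endian CRC-32 appended before base64 encoding (the representation used for NVMe DH-HMAC-CHAP secrets). Shown for gen_dhchap_key sha256 32, i.e. digest id 1 and 16 random bytes:

  digest=1                                   # 0=null, 1=sha256, 2=sha384, 3=sha512
  key=$(xxd -p -c0 -l 16 /dev/urandom)       # 32 hex characters
  file=$(mktemp -t spdk.key-sha256.XXX)
  python3 - "$key" "$digest" > "$file" <<'PY'
  import base64, sys, zlib
  secret = sys.argv[1].encode()                    # the ASCII hex string is the secret
  digest = int(sys.argv[2])
  crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed trailing CRC-32, little endian
  print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(secret + crc).decode()))
  PY
  chmod 0600 "$file"

Run with the hex key generated above for keys[1], this should reproduce the DHHC-1:00:NDE2ZGMx...: string that reappears later when that key is handed to the kernel target.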
01:06:08.664 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:08.664 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ijL 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.1PS ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1PS 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.45z 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.DWo ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DWo 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.7wf 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.NN4 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NN4 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
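The pattern repeating above, and continuing below for key3 and key4, is host/auth.sh registering every generated key file with the target's keyring once the nvmf_tgt process (pid 110578) is listening on /var/tmp/spdk.sock; rpc_cmd is the autotest wrapper around scripts/rpc.py aimed at that socket. Stripped of the xtrace noise, the loop amounts to this sketch:

  for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
      if [[ -n ${ckeys[i]} ]]; then                       # ckeys[4] is empty, so key4 gets no controller key
          rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
      fi
  done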
01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Put 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.17j ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.17j 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.eN1 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
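From here nvmet_auth_init drives configure_kernel_target, invoked with subsystem NQN nqn.2024-02.io.spdk:cnode0 and listen address 10.0.0.1: the modprobe and block-device scan that follow pick an unused local NVMe disk (the GPT "No valid GPT data, bailing" lines are the block_in_use check passing), expose it as namespace 1 of the subsystem, and open a TCP port on 4420, all through configfs. The bare mkdir/echo/ln -s entries later in the log map onto roughly this sequence; the attribute names are my mapping onto the standard nvmet configfs layout and are not spelled out in the log itself:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=$nvmet/ports/1
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"     # assumed target of the SPDK-nqn... echo
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"          # the device selected by the scan above
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

The nvme discover against 10.0.0.1:4420 further down is a sanity check that this port is live; authentication proper starts when host/auth.sh allow-lists nqn.2024-02.io.spdk:host0 under the subsystem, turns attr_allow_any_host back off, and writes 'hmac(sha256)', ffdhe2048, and the DHHC-1 key and controller key into (presumably) that host's dhchap_hash, dhchap_dhgroup, dhchap_key, and dhchap_ctrl_key attributes.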
01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 01:06:08.925 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 01:06:09.184 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 01:06:09.184 11:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:06:09.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:06:09.750 Waiting for block devices as requested 01:06:09.750 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:06:09.750 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:06:10.687 No valid GPT data, bailing 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:06:10.687 No valid GPT data, bailing 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 01:06:10.687 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:06:10.947 No valid GPT data, bailing 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:06:10.947 No valid GPT data, bailing 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 01:06:10.947 11:01:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -a 10.0.0.1 -t tcp -s 4420 01:06:10.947 01:06:10.947 Discovery Log Number of Records 2, Generation counter 2 01:06:10.947 =====Discovery Log Entry 0====== 01:06:10.947 trtype: tcp 01:06:10.947 adrfam: ipv4 01:06:10.947 subtype: current discovery subsystem 01:06:10.947 treq: not specified, sq flow control disable supported 01:06:10.947 portid: 1 01:06:10.947 trsvcid: 4420 01:06:10.947 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:06:10.947 traddr: 10.0.0.1 01:06:10.947 eflags: none 01:06:10.947 sectype: none 01:06:10.947 =====Discovery Log Entry 1====== 01:06:10.947 trtype: tcp 01:06:10.947 adrfam: ipv4 01:06:10.947 subtype: nvme subsystem 01:06:10.947 treq: not specified, sq flow control disable supported 01:06:10.947 portid: 1 01:06:10.947 trsvcid: 4420 01:06:10.947 subnqn: nqn.2024-02.io.spdk:cnode0 01:06:10.947 traddr: 10.0.0.1 01:06:10.947 eflags: none 01:06:10.947 sectype: none 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:10.947 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.207 11:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.207 nvme0n1 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.207 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.465 nvme0n1 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:11.465 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.466 nvme0n1 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.466 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.724 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.724 11:01:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.725 nvme0n1 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 01:06:11.725 11:01:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.725 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.984 nvme0n1 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.984 nvme0n1 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:11.984 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.241 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.241 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:12.242 11:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:12.500 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:12.500 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:12.500 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:12.500 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 01:06:12.500 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.501 nvme0n1 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.501 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.761 nvme0n1 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.761 11:01:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.761 nvme0n1 01:06:12.761 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:13.020 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.021 nvme0n1 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.021 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
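For readability: each connect_authenticate round in this trace boils down to the same four host-side RPCs. The fragment below is a minimal sketch assembled from the calls visible above, not the literal body of host/auth.sh; keyid 1 with dhgroup ffdhe2048 is shown as one representative combination, and rpc_cmd is the autotest wrapper the trace itself uses.

# Sketch of one host-side authentication round, reconstructed from this trace.
# Assumes the autotest environment (rpc_cmd helper, keys already provisioned on the target).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1   # the keyid=4 rounds have no ckey, so they omit --dhchap-ctrlr-key
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller attached, i.e. authentication succeeded
rpc_cmd bdev_nvme_detach_controller nvme0            # tear down before the next digest/dhgroup/key combination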
01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.280 11:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.280 nvme0n1 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:13.280 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
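The trace prefixes (host/auth.sh@101 through @104) also show the loop that drives these rounds. A hedged reconstruction of that structure, with the dhgroup list inferred only from what this part of the trace exercises:

# Assumed driver loop, inferred from the @101-@104 trace lines; the dhgroups
# array below is just the groups seen in this section, and "keys" stands for
# the script's own DHHC-1 key array (indexes 0-4 in this run).
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target side: install key/ckey for this round
        connect_authenticate sha256 "$dhgroup" "$keyid"  # host side: the attach/verify/detach sketch above
    done
done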
01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.848 nvme0n1 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:13.848 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.107 11:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.107 nvme0n1 01:06:14.107 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.107 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:14.107 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:14.107 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.107 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.107 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.366 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.366 nvme0n1 01:06:14.367 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.367 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:14.367 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:14.367 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.367 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.367 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.626 nvme0n1 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.626 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:14.886 11:01:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:14.886 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.887 nvme0n1 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.887 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:15.145 11:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:16.584 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.843 nvme0n1 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:16.843 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.101 nvme0n1 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:17.101 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:17.102 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:17.102 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:17.102 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:17.102 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 01:06:17.102 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:17.102 
11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:17.102 11:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.102 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.667 nvme0n1 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:17.667 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:17.668 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:17.668 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:17.668 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:17.668 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.668 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.925 nvme0n1 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:17.925 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.926 11:01:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:17.926 11:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.187 nvme0n1 01:06:18.187 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:18.188 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:18.188 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:18.188 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:18.188 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.188 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:18.188 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:18.188 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:18.188 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:18.188 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:18.446 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.703 nvme0n1 01:06:18.704 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:18.962 11:01:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:18.962 11:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.529 nvme0n1 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:19.529 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.097 nvme0n1 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:20.097 
11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
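The stretch of trace above is one digest's worth of an exhaustive sweep: host/auth.sh@100-104 iterate over every digest, DH group and key index, provision the key on the kernel nvmet target with nvmet_auth_set_key (host/auth.sh@42-51), and then authenticate against it with connect_authenticate. The bash sketch below condenses that structure out of the xtrace; the digests/dhgroups/keys arrays are abbreviated to what is visible in this stretch, and the redirection targets of the echo calls (presumably the nvmet configfs DH-HCHAP attributes for the allowed host) are not captured by xtrace, so treat both as assumptions rather than a copy of the real script.

# Condensed sketch, not the real host/auth.sh: loop structure and target-side key
# provisioning reconstructed from the trace above. Array contents beyond keyid 0 and
# the destinations of the echo calls are assumptions.
digests=(sha256 sha384)                            # as seen in this part of the trace
dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192) # likewise; the real arrays may be longer
keys=("DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD:")  # keyid 0 from the trace; keyids 1-4 omitted here
ckeys=("DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=:")

nvmet_auth_set_key() {                             # host/auth.sh@42-51
    local digest dhgroup keyid key ckey
    digest=$1
    dhgroup=$2
    keyid=$3
    key=${keys[keyid]}
    ckey=${ckeys[keyid]}
    # xtrace shows these echoes per invocation; their redirections (presumably the
    # nvmet configfs attributes for nqn.2024-02.io.spdk:host0) are not visible.
    echo "hmac($digest)"
    echo "$dhgroup"
    echo "$key"
    [[ -z $ckey ]] || echo "$ckey"                 # keyid 4 carries no controller key
}

for digest in "${digests[@]}"; do                  # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do            # host/auth.sh@101
        for keyid in "${!keys[@]}"; do             # host/auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104, sketched further below
        done
    done
done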
01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:20.097 11:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.665 nvme0n1 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:20.665 
11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:20.665 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:20.666 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.234 nvme0n1 01:06:21.234 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.234 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:21.234 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:21.234 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.234 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.234 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.234 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:21.234 11:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:21.234 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.234 11:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.234 nvme0n1 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.234 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
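On the host side, every connect_authenticate call in the trace is the same short RPC sequence: restrict the initiator to the digest/DH-group pair under test with bdev_nvme_set_options, attach a controller to the target at 10.0.0.1:4420 with the matching --dhchap-key (plus --dhchap-ctrlr-key whenever a controller key exists for that keyid), check that the controller actually appeared with bdev_nvme_get_controllers, and detach it again. The sketch below strings together exactly the RPCs that appear in the xtrace; rpc_cmd being a thin wrapper around SPDK's scripts/rpc.py is an assumption about the harness, and the hard-coded 10.0.0.1 stands in for what the real script resolves through get_main_ns_ip (nvmf/common.sh@741-755).

# Sketch of connect_authenticate (host/auth.sh@55-65) as exercised above. rpc_cmd as a
# plain rpc.py wrapper is an assumption; the RPC names and DH-HCHAP flags are the ones
# visible in the trace.
rpc_cmd() { "${rootdir:-.}/scripts/rpc.py" "$@"; }

connect_authenticate() {
    local digest dhgroup keyid ckey
    digest=$1
    dhgroup=$2
    keyid=$3
    # Pass a controller key only when one was generated for this keyid (keyid 4 has none).
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Limit the allowed digest and DH group to the combination under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect over TCP and authenticate with the named host key (key0..key4).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"

    # Authentication succeeded only if the controller is now listed; then tear it down
    # so the next digest/dhgroup/keyid combination starts from a clean slate.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

Each successful iteration in the trace shows the same evidence of that sequence: bdev_nvme_get_controllers reports nvme0, the namespace bdev nvme0n1 is printed, and the controller is detached before the loop moves on to the next combination.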
01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.494 nvme0n1 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.494 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.754 nvme0n1 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.754 nvme0n1 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:21.754 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.013 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.014 nvme0n1 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
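At host/auth.sh@101-@104 the trace has just advanced the outer sweep from ffdhe2048 to ffdhe3072: one loop over the DH groups, an inner loop over every key index, and for each pair a target-side nvmet_auth_set_key followed by an initiator-side connect_authenticate. The self-contained sketch below mirrors only that loop structure; the two helpers are stubs standing in for the real functions traced above, only sha384 appears in this stretch of the log, and the key placeholders are not the real DHHC-1 secrets.

```bash
#!/usr/bin/env bash
# Sketch of the sweep driving this part of the trace: every DH group is exercised
# with every key index, first programming the kernel nvmet target, then
# authenticating from the SPDK initiator. Helpers are stubs, not the real ones.
set -euo pipefail

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)
keys=(k0 k1 k2 k3 k4)            # placeholders; the real script generates DHHC-1 secrets

nvmet_auth_set_key()   { echo "target:    hmac($1) $2 keyid=$3"; }   # stub
connect_authenticate() { echo "initiator: hmac($1) $2 keyid=$3"; }   # stub

for dhgroup in "${dhgroups[@]}"; do           # host/auth.sh@101
    for keyid in "${!keys[@]}"; do            # host/auth.sh@102
        nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"   # host/auth.sh@103
        connect_authenticate sha384 "$dhgroup" "$keyid"   # host/auth.sh@104
    done
done
```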
01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.014 11:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.272 nvme0n1 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
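The get_main_ns_ip block that repeats before every attach (nvmf/common.sh@741-@755 in the trace) just resolves which address the initiator should dial for the transport under test; for tcp it lands on NVMF_INITIATOR_IP, i.e. 10.0.0.1. A sketch of that selection logic is below; only the tcp/rdma candidate names and the 10.0.0.1 result are visible in the trace, while the TEST_TRANSPORT variable name and the rdma-side address are assumptions for illustration.

```bash
#!/usr/bin/env bash
# Sketch of the address-selection logic traced as get_main_ns_ip.
set -euo pipefail

NVMF_FIRST_TARGET_IP=10.0.0.2    # assumed rdma-path address
NVMF_INITIATOR_IP=10.0.0.1       # matches the address echoed in the trace
TEST_TRANSPORT=tcp               # assumed variable name for the transport under test

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1                        # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1    # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}                        # name of the variable holding the address
    [[ -z ${!ip} ]] && return 1                                 # trace: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                               # trace: echo 10.0.0.1
}

get_main_ns_ip    # prints 10.0.0.1 for tcp
```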
01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:22.272 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.273 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.532 nvme0n1 01:06:22.532 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.532 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:22.532 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:22.532 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.532 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.533 nvme0n1 01:06:22.533 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 01:06:22.792 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.793 nvme0n1 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:22.793 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.052 nvme0n1 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:23.052 11:01:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:06:23.052 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.053 11:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.419 nvme0n1 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:23.419 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:23.420 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:23.420 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:23.420 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:23.420 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:23.420 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:23.420 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.420 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.679 nvme0n1 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.679 11:01:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:23.679 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.680 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.940 nvme0n1 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 01:06:23.940 11:01:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:23.940 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.198 nvme0n1 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:24.198 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:06:24.199 11:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.457 nvme0n1 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.457 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.716 nvme0n1 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:24.716 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.280 nvme0n1 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:25.280 11:01:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:25.280 11:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.539 nvme0n1 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:25.539 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.797 nvme0n1 01:06:25.797 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:25.797 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:25.797 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:25.797 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:25.797 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.797 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:25.797 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 01:06:25.797 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:25.797 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:25.797 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
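The nvmet_auth_set_key traces above (host/auth.sh@42-51) stage the target side of each DH-HMAC-CHAP combination: they select a digest and DH group, pick the DHHC-1 secret for the current keyid (plus a controller secret when one exists), and echo 'hmac(<digest>)', the dhgroup name, and the secrets into the kernel nvmet configuration for the test host NQN. A minimal bash sketch of that step follows; the configfs location and attribute names are assumptions based on the standard Linux nvmet host entries, and the secret values are placeholders rather than the full keys carried in this log.

    #!/usr/bin/env bash
    # Sketch only: stage DH-HMAC-CHAP material for one host on a Linux nvmet target.
    # The configfs path is an assumption; the test derives its paths from its own environment.
    host_dir="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"
    digest=sha384                    # matches digest=sha384 in the trace above
    dhgroup=ffdhe6144                # matches dhgroup=ffdhe6144 in the trace above
    key='DHHC-1:03:placeholder'      # keyid 4 secret; the log carries the real value
    ckey=''                          # keyid 4 has no controller secret in this run

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"       # echo 'hmac(sha384)'
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"    # echo ffdhe6144
    echo "${key}"          > "${host_dir}/dhchap_key"        # echo DHHC-1:03:...
    [[ -z "${ckey}" ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"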
01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:26.054 11:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:26.312 nvme0n1 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
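Each connect_authenticate pass traced here (host/auth.sh@55-65) exercises the host side of the same combination: it restricts the SPDK bdev_nvme layer to a single digest and DH group, attaches a controller over TCP with the matching DH-CHAP key (and controller key when a ckey is defined), checks that bdev_nvme_get_controllers reports nvme0, and detaches it before the next keyid. The standalone sketch below re-expresses that cycle with SPDK's scripts/rpc.py; rpc_cmd in the trace is the test suite's wrapper around the same RPCs, and key0/ckey0 are names of key objects registered earlier in the test, outside this excerpt.

    #!/usr/bin/env bash
    # Sketch only: one host-side DH-HMAC-CHAP connect/verify/disconnect cycle.
    rpc=./scripts/rpc.py

    # Allow exactly one digest/dhgroup pair, mirroring bdev_nvme_set_options in the trace.
    "$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # Attach over TCP to the target configured above; authentication happens during connect.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # A controller named nvme0 only shows up if the DH-CHAP handshake succeeded.
    [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

    # Tear down before the next digest/dhgroup/keyid combination.
    "$rpc" bdev_nvme_detach_controller nvme0

The surrounding for-loops in the trace (host/auth.sh@100-102) repeat this pair of target and host steps for every digest, DH group, and key index.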
01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:26.312 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:26.879 nvme0n1 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:26.879 11:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:27.444 nvme0n1 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 01:06:27.444 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:27.445 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:28.012 nvme0n1 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:28.012 11:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:28.580 nvme0n1 01:06:28.580 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:28.580 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 01:06:28.580 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:28.580 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:28.580 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:28.580 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:28.580 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:28.580 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:28.581 11:01:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:28.581 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.148 nvme0n1 01:06:29.148 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.148 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:29.148 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.148 11:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:29.148 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.148 11:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.148 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.406 nvme0n1 01:06:29.406 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.406 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:29.406 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:29.406 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.406 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.407 11:01:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.407 nvme0n1 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.407 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:29.667 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.668 nvme0n1 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.668 11:01:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:29.668 11:01:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.668 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.930 nvme0n1 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.930 nvme0n1 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:29.930 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.190 11:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.190 nvme0n1 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.190 
11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:30.190 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.191 11:01:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.191 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.450 nvme0n1 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
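The nvmet_auth_set_key calls running through this stretch of the trace (host/auth.sh@42-51) program the Linux kernel nvmet target side of DH-HMAC-CHAP for the host NQN: they echo the hash name ('hmac(sha512)'), the DH group (ffdhe3072 here) and the DHHC-1 secrets held in $key/$ckey. The xtrace records only the echo arguments, not their redirection targets, so the configfs attribute paths in the sketch below are an assumption, not something shown in this log:

  # Target-side sketch of nvmet_auth_set_key; the configfs paths are assumed.
  # $key and $ckey hold the DHHC-1:... strings assigned a few entries above.
  hostnqn=nqn.2024-02.io.spdk:host0   # matches the -q argument used in the attach calls
  echo 'hmac(sha512)' > /sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_hash
  echo ffdhe3072      > /sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_dhgroup
  echo "$key"         > /sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_key
  [[ -n $ckey ]] && echo "$ckey" > /sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_ctrl_key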
01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.450 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.709 nvme0n1 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.709 11:01:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:30.709 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
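On the host side, connect_authenticate (host/auth.sh@55-61) restricts the negotiable DH-HMAC-CHAP parameters with bdev_nvme_set_options, resolves the initiator address via get_main_ns_ip (10.0.0.1 in this run), and then attaches the controller with the key names for this iteration. Roughly the following sequence, assuming rpc_cmd wraps SPDK's scripts/rpc.py and that the key3/ckey3 names were registered with the keyring earlier in the test (that setup is not shown in this part of the trace):

  # Host-side sketch of the sha512 / ffdhe3072 / keyid=3 iteration (rpc.py path assumed)
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3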
01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.710 nvme0n1 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.710 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:30.968 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:30.969 
11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.969 nvme0n1 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:30.969 11:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.228 nvme0n1 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:31.228 11:01:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.228 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.487 nvme0n1 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
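Between iterations the test verifies that the authenticated controller actually came up and then detaches it (host/auth.sh@64-65), so each digest/dhgroup/key combination starts from a clean state. In rpc.py terms (path again assumed), the recurring check amounts to:

  # Confirm the preceding attach produced a controller named nvme0, then tear it down
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]    # fails if the authenticated connection was not established
  scripts/rpc.py bdev_nvme_detach_controller nvme0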
01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.487 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.744 nvme0n1 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:31.744 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:31.745 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:31.745 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:31.745 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:31.745 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.003 nvme0n1 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:32.003 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:32.004 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:32.004 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:32.004 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:32.004 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:32.004 11:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:32.004 11:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:32.004 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.004 11:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.262 nvme0n1 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
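The ckey assignment at host/auth.sh@58, visible throughout this trace, is what makes the controller key optional: the array expands to the extra --dhchap-ctrlr-key argument only when a controller key exists for that key ID, so a key ID without one (key4 in this run) authenticates the host only instead of bidirectionally. A short sketch of that pattern, assuming keys/ckeys arrays that hold the key names shown above:

# Expands to nothing when ckeys[keyid] is empty, so "${ckey[@]}" adds no arguments.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"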
01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.262 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.521 nvme0n1 01:06:32.521 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.521 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:32.521 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.521 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:32.521 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.521 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
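The get_main_ns_ip helper traced just above supplies the address passed to every attach: it maps the transport to the name of the environment variable holding the relevant IP and resolves it, which yields 10.0.0.1 for tcp in this run. A sketch of that selection logic, pieced together from the nvmf/common.sh lines in the log; the transport variable name and the use of indirect expansion are assumptions, not the literal source:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # TEST_TRANSPORT is assumed to hold "tcp" here; the log only shows the
    # expanded values. Indirection turns the variable name into its value.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n ${!ip} ]] && echo "${!ip}"    # prints 10.0.0.1 in this run
}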
01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:32.779 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.036 nvme0n1 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.036 11:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.295 nvme0n1 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.295 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.863 nvme0n1 01:06:33.863 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.863 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:33.863 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:33.863 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.863 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:33.864 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:34.123 nvme0n1 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:34.123 11:01:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyM2E1NzUxN2QzOTAyNmM5YWE2M2VhMzRlZmFkMGHyQseD: 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: ]] 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTA2ZDBkNzAzZTFmMTY5MjRjN2M1ZTY1OTY4OWQzMjk0ODJjZTQ3MzA0Mzc5ZDlhMzU3NjE5MzRjMWE3NzgyMtzCtyQ=: 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:34.123 11:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:34.691 nvme0n1 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:34.691 11:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:35.257 nvme0n1 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:35.257 11:01:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3MTQ1ZGNkNGU0ZGEzZDE2NDgxYmE4YzhlNjYyZGESrqgV: 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: ]] 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJiMjk1Zjc4YWMzZmY3Mzk3ODBiYWM1YTc3ZTgzZTmtjlmE: 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:35.257 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:35.823 nvme0n1 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzU3ZWNiNjJiNzcxZTdhZDcyNjk2YWQ3YWE5Mzk0NjY3NmRlN2I4YjdlYzMyZDY3lhPEPA==: 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: ]] 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWY4Nzk5YWE3ODRiYjZlNDA5NjI4Y2ZiYjI4MzVlNDi4YeYI: 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 01:06:35.823 11:01:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:35.823 11:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:36.390 nvme0n1 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRlNTI3ZTQ5MDAzN2NmYmZjODVjNjhlYTc4MTkzYTVmYzBjNmQ0Y2M4YzQ1MzhmY2Y0MjU0ZDg0Zjk1NmRjMwmNqX4=: 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:06:36.390 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:36.957 nvme0n1 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE2ZGMxN2NjYjVlYjY0MjE0MmI3NzM4NzE1NTY0Y2MxNmM1ODNhZTQ4YTkxZjQwu1F2CA==: 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: ]] 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2RhMDE4Yzk5NWI0NWY0NTBmODUzMDFlZTBjODAyNDc2NmNkZTMzZmQ3ZmNkMDY1LLxlAw==: 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:36.957 
11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:36.957 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:36.958 2024/07/22 11:01:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:06:36.958 request: 01:06:36.958 { 01:06:36.958 "method": "bdev_nvme_attach_controller", 01:06:36.958 "params": { 01:06:36.958 "name": "nvme0", 01:06:36.958 "trtype": "tcp", 01:06:36.958 "traddr": "10.0.0.1", 01:06:36.958 "adrfam": "ipv4", 01:06:36.958 "trsvcid": "4420", 01:06:36.958 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:06:36.958 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:06:36.958 "prchk_reftag": false, 01:06:36.958 "prchk_guard": false, 01:06:36.958 "hdgst": false, 01:06:36.958 "ddgst": false 01:06:36.958 } 01:06:36.958 } 01:06:36.958 Got JSON-RPC error response 01:06:36.958 GoRPCClient: error on JSON-RPC call 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:06:36.958 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:37.217 2024/07/22 11:01:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:06:37.217 request: 01:06:37.217 { 01:06:37.217 "method": "bdev_nvme_attach_controller", 01:06:37.217 "params": { 01:06:37.217 "name": 
"nvme0", 01:06:37.217 "trtype": "tcp", 01:06:37.217 "traddr": "10.0.0.1", 01:06:37.217 "adrfam": "ipv4", 01:06:37.217 "trsvcid": "4420", 01:06:37.217 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:06:37.217 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:06:37.217 "prchk_reftag": false, 01:06:37.217 "prchk_guard": false, 01:06:37.217 "hdgst": false, 01:06:37.217 "ddgst": false, 01:06:37.217 "dhchap_key": "key2" 01:06:37.217 } 01:06:37.217 } 01:06:37.217 Got JSON-RPC error response 01:06:37.217 GoRPCClient: error on JSON-RPC call 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:37.217 11:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:37.217 2024/07/22 11:01:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:06:37.217 request: 01:06:37.217 { 01:06:37.217 "method": "bdev_nvme_attach_controller", 01:06:37.217 "params": { 01:06:37.217 "name": "nvme0", 01:06:37.217 "trtype": "tcp", 01:06:37.217 "traddr": "10.0.0.1", 01:06:37.217 "adrfam": "ipv4", 01:06:37.217 "trsvcid": "4420", 01:06:37.217 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:06:37.217 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:06:37.217 "prchk_reftag": false, 01:06:37.217 "prchk_guard": false, 01:06:37.217 "hdgst": false, 01:06:37.217 "ddgst": false, 01:06:37.217 "dhchap_key": "key1", 01:06:37.217 "dhchap_ctrlr_key": "ckey2" 01:06:37.217 } 01:06:37.217 } 01:06:37.217 Got JSON-RPC error response 01:06:37.217 GoRPCClient: error on JSON-RPC call 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:06:37.217 rmmod nvme_tcp 01:06:37.217 rmmod nvme_fabrics 01:06:37.217 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 110578 ']' 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 110578 01:06:37.476 11:01:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 110578 ']' 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 110578 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110578 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:06:37.476 killing process with pid 110578 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110578' 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 110578 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 110578 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:06:37.476 11:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:06:37.735 11:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 01:06:37.735 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 01:06:37.735 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 01:06:37.735 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 01:06:37.735 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:06:37.735 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:06:37.735 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:06:37.735 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 01:06:37.735 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 01:06:37.735 11:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:06:38.671 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:06:38.671 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:06:38.671 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:06:38.671 11:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ijL /tmp/spdk.key-null.45z /tmp/spdk.key-sha256.7wf /tmp/spdk.key-sha384.Put /tmp/spdk.key-sha512.eN1 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 01:06:38.671 11:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:06:39.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:06:39.237 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:06:39.237 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:06:39.237 01:06:39.237 real 0m32.809s 01:06:39.237 user 0m30.138s 01:06:39.237 sys 0m4.972s 01:06:39.237 11:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:39.237 11:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:39.237 ************************************ 01:06:39.237 END TEST nvmf_auth_host 01:06:39.237 ************************************ 01:06:39.495 11:01:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:06:39.495 11:01:47 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 01:06:39.495 11:01:47 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:06:39.495 11:01:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:06:39.495 11:01:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:39.495 11:01:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:06:39.495 ************************************ 01:06:39.495 START TEST nvmf_digest 01:06:39.495 ************************************ 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:06:39.495 * Looking for test storage... 
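The tail of nvmf_auth_host above removes the DHHC-1 key files and tears down the kernel nvmet target that acted as the authentication peer. Roughly, the configfs teardown has to run in this order, because a directory cannot be removed while links or children remain. The sketch below is assembled from the trace; the redirect target of the bare 'echo 0' at nvmf/common.sh@686 is not captured by the xtrace, so the namespace-disable line is an assumption.

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

  rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"        # unlink the host from the subsystem
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$subsys/namespaces/1/enable"                         # assumption: what the bare 'echo 0' writes to
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0   # detach subsystem from the port
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                                    # unload the kernel target last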
01:06:39.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
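The digest test then calls nvmftestinit, and because NET_TYPE=virt it builds its own veth topology instead of touching physical NICs. Condensed, the nvmf_veth_init sequence traced in the records that follow looks like this (commands taken from the trace; the intermediate "ip link set ... up" steps and the failed teardown of a previous topology are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator <-> bridge
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target    <-> bridge
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                    # sanity checks in both directions
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1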
01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:39.495 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:06:39.753 Cannot find device "nvmf_tgt_br" 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:06:39.753 Cannot find device "nvmf_tgt_br2" 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:06:39.753 Cannot find device "nvmf_tgt_br" 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:06:39.753 Cannot find device "nvmf_tgt_br2" 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:39.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:39.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:06:39.753 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:40.011 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:40.011 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:40.011 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:06:40.011 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:06:40.011 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:06:40.011 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:40.011 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:40.011 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:40.011 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:40.011 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:06:40.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:06:40.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 01:06:40.011 01:06:40.011 --- 10.0.0.2 ping statistics --- 01:06:40.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:40.012 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:06:40.012 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:40.012 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 01:06:40.012 01:06:40.012 --- 10.0.0.3 ping statistics --- 01:06:40.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:40.012 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:40.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:06:40.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 01:06:40.012 01:06:40.012 --- 10.0.0.1 ping statistics --- 01:06:40.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:40.012 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:06:40.012 ************************************ 01:06:40.012 START TEST nvmf_digest_clean 01:06:40.012 ************************************ 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=112151 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 112151 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112151 ']' 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:40.012 11:01:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:40.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:40.012 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:40.012 [2024-07-22 11:01:47.908402] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:06:40.012 [2024-07-22 11:01:47.908473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:40.269 [2024-07-22 11:01:48.027194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:06:40.269 [2024-07-22 11:01:48.051811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:40.269 [2024-07-22 11:01:48.091502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:40.269 [2024-07-22 11:01:48.091542] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:40.269 [2024-07-22 11:01:48.091566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:40.269 [2024-07-22 11:01:48.091574] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:40.269 [2024-07-22 11:01:48.091581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
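The nvmfappstart call above launches the target application inside the namespace and holds it at --wait-for-rpc so the digest configuration can be applied before any subsystem comes up. A minimal equivalent of that launch-and-wait step is sketched here; waitforlisten in the trace polls the RPC socket, and the rpc_get_methods probe below is just one way to do the same thing, not the helper's literal implementation.

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # block until the app answers on the default /var/tmp/spdk.sock
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
      sleep 0.5
  done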
01:06:40.269 [2024-07-22 11:01:48.091605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:06:40.833 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:40.833 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:06:40.833 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:06:40.833 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 01:06:40.833 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:41.091 null0 01:06:41.091 [2024-07-22 11:01:48.883737] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:41.091 [2024-07-22 11:01:48.907802] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112200 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112200 /var/tmp/bperf.sock 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112200 ']' 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
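common_target_config then feeds a batch of JSON-RPC calls to the target through rpc_cmd's open socket; the batch itself is not echoed by the xtrace, only its effects (the null0 bdev, the TCP transport init and the listener on 10.0.0.2:4420). An explicit, roughly equivalent sequence is sketched below. The null bdev size/block size and the -a (allow any host) flag are assumptions not visible in the log; the serial number, the cnode1 NQN and NVMF_TRANSPORT_OPTS='-t tcp -o' all appear in the trace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc framework_start_init                                   # target was held at --wait-for-rpc
  $rpc bdev_null_create null0 100 4096                        # assumption: 100 MiB / 4K blocks, not shown in the log
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME -a   # assumption: any host allowed
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420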
01:06:41.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:41.091 11:01:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:41.091 [2024-07-22 11:01:48.966138] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:06:41.091 [2024-07-22 11:01:48.966196] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112200 ] 01:06:41.349 [2024-07-22 11:01:49.083805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:06:41.349 [2024-07-22 11:01:49.107526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:41.349 [2024-07-22 11:01:49.149497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:41.915 11:01:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:41.915 11:01:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:06:41.915 11:01:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:06:41.915 11:01:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:06:41.915 11:01:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:06:42.173 11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:42.173 11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:42.431 nvme0n1 01:06:42.431 11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:06:42.431 11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:42.731 Running I/O for 2 seconds... 
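The host side of this first pass (randread, 4 KiB, queue depth 128) is fully visible in the trace and boils down to the following: start bdevperf suspended at --wait-for-rpc, initialize it, attach to the target with data digest enabled, and drive the run through bdevperf.py. Paths and arguments are the ones logged above; the waitforlisten step between starting bdevperf and the first RPC is elided from this sketch.

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf=/var/tmp/bperf.sock

  $bdevperf -m 2 -r $bperf -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  $rpc -s $bperf framework_start_init
  $rpc -s $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0           # --ddgst: enable TCP data digest on the initiator
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf perform_tests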
01:06:44.643 01:06:44.643 Latency(us) 01:06:44.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:44.643 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:06:44.643 nvme0n1 : 2.00 25261.02 98.68 0.00 0.00 5062.11 2763.57 13580.95 01:06:44.643 =================================================================================================================== 01:06:44.643 Total : 25261.02 98.68 0.00 0.00 5062.11 2763.57 13580.95 01:06:44.643 0 01:06:44.643 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:06:44.644 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:06:44.644 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:06:44.644 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:06:44.644 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:06:44.644 | select(.opcode=="crc32c") 01:06:44.644 | "\(.module_name) \(.executed)"' 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112200 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112200 ']' 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112200 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112200 01:06:44.902 killing process with pid 112200 01:06:44.902 Received shutdown signal, test time was about 2.000000 seconds 01:06:44.902 01:06:44.902 Latency(us) 01:06:44.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:44.902 =================================================================================================================== 01:06:44.902 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112200' 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112200 01:06:44.902 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112200 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112287 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112287 /var/tmp/bperf.sock 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112287 ']' 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:45.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:45.160 11:01:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:45.160 [2024-07-22 11:01:52.922863] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:06:45.160 [2024-07-22 11:01:52.923083] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 01:06:45.161 Zero copy mechanism will not be used. 01:06:45.161 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112287 ] 01:06:45.161 [2024-07-22 11:01:53.041078] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
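After each pass the script verifies that CRC-32C digests were really computed, and by the expected accel module (software here, since every run_bperf call passes scan_dsa=false). The check behind the accel_get_stats and jq records above is, in sketch form:

  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 ))                  # some digests must actually have been computed
  [[ $acc_module == software ]]           # DSA offload was not requested, so the software module is expected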
01:06:45.161 [2024-07-22 11:01:53.065035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:45.418 [2024-07-22 11:01:53.106102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:45.986 11:01:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:45.986 11:01:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:06:45.986 11:01:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:06:45.986 11:01:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:06:45.986 11:01:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:06:46.244 11:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:46.244 11:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:46.503 nvme0n1 01:06:46.503 11:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:06:46.503 11:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:46.503 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:46.503 Zero copy mechanism will not be used. 01:06:46.503 Running I/O for 2 seconds... 01:06:49.033 01:06:49.033 Latency(us) 01:06:49.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:49.033 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:06:49.033 nvme0n1 : 2.00 10171.05 1271.38 0.00 0.00 1570.25 503.36 9843.56 01:06:49.033 =================================================================================================================== 01:06:49.033 Total : 10171.05 1271.38 0.00 0.00 1570.25 503.36 9843.56 01:06:49.033 0 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:06:49.033 | select(.opcode=="crc32c") 01:06:49.033 | "\(.module_name) \(.executed)"' 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112287 01:06:49.033 11:01:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112287 ']' 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112287 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112287 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:06:49.033 killing process with pid 112287 01:06:49.033 Received shutdown signal, test time was about 2.000000 seconds 01:06:49.033 01:06:49.033 Latency(us) 01:06:49.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:49.033 =================================================================================================================== 01:06:49.033 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112287' 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112287 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112287 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:06:49.033 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112372 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112372 /var/tmp/bperf.sock 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112372 ']' 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:49.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:49.034 11:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:49.034 [2024-07-22 11:01:56.865153] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:06:49.034 [2024-07-22 11:01:56.865235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112372 ] 01:06:49.292 [2024-07-22 11:01:56.983621] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:06:49.292 [2024-07-22 11:01:56.996737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:49.292 [2024-07-22 11:01:57.042246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:49.858 11:01:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:49.858 11:01:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:06:49.858 11:01:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:06:49.858 11:01:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:06:49.858 11:01:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:06:50.131 11:01:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:50.131 11:01:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:50.401 nvme0n1 01:06:50.401 11:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:06:50.401 11:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:50.658 Running I/O for 2 seconds... 
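The run above is representative of every digest-clean case in this log: bdevperf is started with -z --wait-for-rpc against /var/tmp/bperf.sock, framework_start_init brings the app up, bdev_nvme_attach_controller --ddgst attaches the NVMe-oF/TCP controller with the data digest enabled so every data PDU carries a CRC32C, perform_tests drives the I/O, and accel_get_stats filtered through jq reports which accel module executed the crc32c operations (with scan_dsa=false the expected module is software). What follows is a condensed sketch of that sequence, not the test script itself: every command, flag and path is taken from the trace above (here the randwrite, 4096-byte, queue-depth-128 case), while the shell variables and the explicit backgrounding of bdevperf are editorial shorthand.

BPERF_SOCK=/var/tmp/bperf.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# bdevperf idles until framework_start_init arrives because of --wait-for-rpc
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
"$RPC" -s "$BPERF_SOCK" framework_start_init
# --ddgst enables the TCP data digest, which is what generates the crc32c work
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests
# confirm the digests were computed, and by which accel module
"$RPC" -s "$BPERF_SOCK" accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The trace then reads the jq output into acc_module and acc_executed, asserts (( acc_executed > 0 )) and compares the module name against the expected one, which is the [[ software == software ]] check visible above.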
01:06:52.560 01:06:52.560 Latency(us) 01:06:52.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:52.560 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:52.560 nvme0n1 : 2.00 29665.20 115.88 0.00 0.00 4310.17 2263.49 8422.30 01:06:52.560 =================================================================================================================== 01:06:52.560 Total : 29665.20 115.88 0.00 0.00 4310.17 2263.49 8422.30 01:06:52.560 0 01:06:52.560 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:06:52.560 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:06:52.560 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:06:52.560 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:06:52.560 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:06:52.560 | select(.opcode=="crc32c") 01:06:52.560 | "\(.module_name) \(.executed)"' 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112372 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112372 ']' 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112372 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112372 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:06:52.819 killing process with pid 112372 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112372' 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112372 01:06:52.819 Received shutdown signal, test time was about 2.000000 seconds 01:06:52.819 01:06:52.819 Latency(us) 01:06:52.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:52.819 =================================================================================================================== 01:06:52.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:52.819 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112372 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112461 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112461 /var/tmp/bperf.sock 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112461 ']' 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:53.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:53.079 11:02:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:53.079 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:53.079 Zero copy mechanism will not be used. 01:06:53.079 [2024-07-22 11:02:00.831606] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:06:53.079 [2024-07-22 11:02:00.831666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112461 ] 01:06:53.079 [2024-07-22 11:02:00.949566] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
01:06:53.079 [2024-07-22 11:02:00.972621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:53.338 [2024-07-22 11:02:01.014409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:53.906 11:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:53.906 11:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:06:53.906 11:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:06:53.906 11:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:06:53.906 11:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:06:54.165 11:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:54.165 11:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:54.424 nvme0n1 01:06:54.424 11:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:06:54.424 11:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:54.424 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:54.424 Zero copy mechanism will not be used. 01:06:54.424 Running I/O for 2 seconds... 01:06:56.954 01:06:56.954 Latency(us) 01:06:56.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:56.954 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:06:56.954 nvme0n1 : 2.00 10031.32 1253.92 0.00 0.00 1591.84 1329.14 3500.52 01:06:56.954 =================================================================================================================== 01:06:56.954 Total : 10031.32 1253.92 0.00 0.00 1591.84 1329.14 3500.52 01:06:56.954 0 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:06:56.954 | select(.opcode=="crc32c") 01:06:56.954 | "\(.module_name) \(.executed)"' 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112461 01:06:56.954 11:02:04 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112461 ']' 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112461 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112461 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:06:56.954 killing process with pid 112461 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112461' 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112461 01:06:56.954 Received shutdown signal, test time was about 2.000000 seconds 01:06:56.954 01:06:56.954 Latency(us) 01:06:56.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:56.954 =================================================================================================================== 01:06:56.954 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112461 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 112151 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112151 ']' 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112151 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112151 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:06:56.954 killing process with pid 112151 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112151' 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112151 01:06:56.954 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112151 01:06:57.214 01:06:57.214 real 0m17.099s 01:06:57.214 user 0m31.169s 01:06:57.214 sys 0m4.970s 01:06:57.214 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:57.214 11:02:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:57.214 ************************************ 01:06:57.214 END TEST nvmf_digest_clean 01:06:57.214 ************************************ 01:06:57.214 11:02:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 01:06:57.214 11:02:04 nvmf_tcp.nvmf_digest -- 
host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 01:06:57.214 11:02:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:06:57.214 ************************************ 01:06:57.214 START TEST nvmf_digest_error 01:06:57.214 ************************************ 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=112570 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 112570 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112570 ']' 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:57.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:57.214 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:57.214 [2024-07-22 11:02:05.077395] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:06:57.214 [2024-07-22 11:02:05.077465] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:57.473 [2024-07-22 11:02:05.195989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:06:57.473 [2024-07-22 11:02:05.220003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:57.473 [2024-07-22 11:02:05.261035] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:57.473 [2024-07-22 11:02:05.261091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:06:57.473 [2024-07-22 11:02:05.261100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:57.473 [2024-07-22 11:02:05.261108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:57.473 [2024-07-22 11:02:05.261115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:57.473 [2024-07-22 11:02:05.261141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:06:58.040 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:58.040 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:06:58.040 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:06:58.040 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 01:06:58.040 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:58.298 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:58.298 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 01:06:58.298 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:58.298 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:58.298 [2024-07-22 11:02:05.980423] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 01:06:58.298 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:58.298 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 01:06:58.299 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 01:06:58.299 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:58.299 11:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:58.299 null0 01:06:58.299 [2024-07-22 11:02:06.068406] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:58.299 [2024-07-22 11:02:06.092480] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112614 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112614 /var/tmp/bperf.sock 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 
4096 -t 2 -q 128 -z 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112614 ']' 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:58.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:58.299 11:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:58.299 [2024-07-22 11:02:06.150427] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:06:58.299 [2024-07-22 11:02:06.150493] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112614 ] 01:06:58.557 [2024-07-22 11:02:06.268190] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:06:58.557 [2024-07-22 11:02:06.291711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:58.557 [2024-07-22 11:02:06.334121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:59.175 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:59.175 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:06:59.175 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:06:59.175 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:06:59.433 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:06:59.433 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:59.433 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:59.433 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:59.433 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:59.433 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:59.692 nvme0n1 01:06:59.692 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:06:59.692 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:06:59.692 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:59.692 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:59.692 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:06:59.692 11:02:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:59.951 Running I/O for 2 seconds... 01:06:59.951 [2024-07-22 11:02:07.709831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.709879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.709891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.718726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.718764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.718776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.729223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.729261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.729282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.740587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.740625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.740636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.750796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.750833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.750844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.761229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.761292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.761304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.771207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.771244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.771255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.781570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.781605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.781617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.791657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.791694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.791705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.801919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.801955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.801966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.812118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.812154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.812166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.823652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.823687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.823714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.832860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.832895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
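The repeating data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pairs here are the intended outcome of the nvmf_digest_error setup traced earlier: the target's crc32c opcode is routed to the error accel module (accel_assign_opc) and then told to corrupt a batch of 256 results (accel_error_inject_error -t corrupt -i 256), so the host side fails its data digest check on received PDUs and the affected READs complete with transient transport errors that the bdev layer can retry (--bdev-retry-count -1). Below is a condensed sketch of that RPC sequence; the commands and flags are taken from the trace, while the shell variable names and the assumption that the plain rpc_cmd calls go to the target's default socket /var/tmp/spdk.sock (the socket named in the waitforlisten message earlier) are editorial.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock
# target side: route crc32c through the error-injection accel module
"$RPC" accel_assign_opc -o crc32c -m error
# host side (bdevperf): keep NVMe error stats, retry indefinitely, attach with data digest enabled
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$RPC" accel_error_inject_error -o crc32c -t disable
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# corrupt the next 256 crc32c results on the target, then drive I/O; the corrupted digests
# surface on the host as the data digest errors logged here
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

The earlier -t disable call clears any previously configured injection, so only the 256 corrupted operations requested here affect the run.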
01:06:59.951 [2024-07-22 11:02:07.832922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.843994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.844031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.844057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.854712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.854747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.854758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.865675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.865710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.865721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:59.951 [2024-07-22 11:02:07.875133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:06:59.951 [2024-07-22 11:02:07.875169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:59.951 [2024-07-22 11:02:07.875180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.886740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.886778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.886789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.895438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.895473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.895500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.905574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.905612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.905639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.915106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.915141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.915168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.925579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.925616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.925628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.934617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.934652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.934663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.945831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.945867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.945878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.955448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.955483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.955509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.966735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.966775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.966786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.976460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.976496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.976522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.987768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.987807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.987834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:07.997879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:07.997914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:07.997925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.008244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.008291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.008303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.017784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.017820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.017831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.028796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.028832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.028843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.038327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.038362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.038373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.049399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 
01:07:00.211 [2024-07-22 11:02:08.049435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.049445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.061058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.061096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.061107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.073024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.073062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.073074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.084944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.084980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.085006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.096906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.096940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.096967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.108880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.108916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.108928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.118685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.118721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.118732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.129952] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.129987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.129999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.211 [2024-07-22 11:02:08.139567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.211 [2024-07-22 11:02:08.139602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.211 [2024-07-22 11:02:08.139613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.151094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.151129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.151141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.161501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.161536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.161547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.171959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.171996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.172007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.181854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.181893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.181905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.191709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.191756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.191768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.200573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.200616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.200627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.213024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.213074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.213086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.221835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.221881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.221893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.233315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.233363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.233390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.243862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.243911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.243922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.254751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.254798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.254810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.263396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.263441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.263452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.275754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.275806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.275819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.285211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.285262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.285284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.295405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.295448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.295461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.306344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.306391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.306403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.316020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.316061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.316073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.326884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.326937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.326965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.337452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.337496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.471 [2024-07-22 11:02:08.337523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.471 [2024-07-22 11:02:08.346943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.471 [2024-07-22 11:02:08.346989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.472 [2024-07-22 11:02:08.347000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.472 [2024-07-22 11:02:08.358463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.472 [2024-07-22 11:02:08.358509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.472 [2024-07-22 11:02:08.358520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.472 [2024-07-22 11:02:08.368779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.472 [2024-07-22 11:02:08.368827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.472 [2024-07-22 11:02:08.368855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.472 [2024-07-22 11:02:08.377469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.472 [2024-07-22 11:02:08.377505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.472 [2024-07-22 11:02:08.377516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.472 [2024-07-22 11:02:08.386880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.472 [2024-07-22 11:02:08.386922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.472 [2024-07-22 11:02:08.386951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.472 [2024-07-22 11:02:08.398584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.472 [2024-07-22 11:02:08.398628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.472 [2024-07-22 11:02:08.398640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.409383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.409429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:07:00.732 [2024-07-22 11:02:08.409456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.420697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.420739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.420751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.428440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.428475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.428486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.439730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.439769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.439781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.449576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.449616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.449627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.459494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.459530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.459541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.470323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.470362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.470389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.480623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.480662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:1956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.480690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.491614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.491650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.491662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.500398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.500433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.500444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.510619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.510659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.510670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.520923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.520960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.520971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.531242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.531288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.531299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.541587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.541622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.541632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.553054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.553092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.553103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.563280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.563315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.563327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.572373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.572409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.572420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.582431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.582466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.582477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.593548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.593584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.593596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.605456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.605492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.605503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.615879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.615917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.615929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.625940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 
01:07:00.732 [2024-07-22 11:02:08.625976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.625987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.635379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.635414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.635425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.645484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.645519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.645530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.732 [2024-07-22 11:02:08.656010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.732 [2024-07-22 11:02:08.656047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.732 [2024-07-22 11:02:08.656058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.667029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.667065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.667077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.678472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.678509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.678519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.689697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.689738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.689756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.700126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.700163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.700174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.708964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.709001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.709012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.719234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.719280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.719291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.730757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.730794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.730805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.740956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.740993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.741004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.751631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.751667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.751678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.762362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.762399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.762411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.772035] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.772073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.772083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.781855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.781892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.781902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.793134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.793174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.793185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.803892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.803933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.803944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.993 [2024-07-22 11:02:08.812874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.993 [2024-07-22 11:02:08.812912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.993 [2024-07-22 11:02:08.812923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.994 [2024-07-22 11:02:08.824424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.994 [2024-07-22 11:02:08.824461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.994 [2024-07-22 11:02:08.824472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.994 [2024-07-22 11:02:08.835749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.994 [2024-07-22 11:02:08.835785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.994 [2024-07-22 11:02:08.835797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 01:07:00.994 [2024-07-22 11:02:08.846040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.994 [2024-07-22 11:02:08.846077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.994 [2024-07-22 11:02:08.846088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.994 [2024-07-22 11:02:08.856892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.994 [2024-07-22 11:02:08.856928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.994 [2024-07-22 11:02:08.856939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.994 [2024-07-22 11:02:08.866801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.994 [2024-07-22 11:02:08.866840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.994 [2024-07-22 11:02:08.866851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.994 [2024-07-22 11:02:08.877003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.994 [2024-07-22 11:02:08.877041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.994 [2024-07-22 11:02:08.877053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.994 [2024-07-22 11:02:08.885477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.994 [2024-07-22 11:02:08.885512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.994 [2024-07-22 11:02:08.885523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.994 [2024-07-22 11:02:08.896804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.994 [2024-07-22 11:02:08.896839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.994 [2024-07-22 11:02:08.896866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.994 [2024-07-22 11:02:08.908101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.994 [2024-07-22 11:02:08.908137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.994 [2024-07-22 11:02:08.908149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:00.994 [2024-07-22 11:02:08.919508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:00.994 [2024-07-22 11:02:08.919543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:00.994 [2024-07-22 11:02:08.919554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:08.929533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:08.929568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:08.929579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:08.939901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:08.939937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:08.939949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:08.948656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:08.948690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:08.948717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:08.959604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:08.959639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:08.959666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:08.970529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:08.970564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:08.970575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:08.981390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:08.981425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:08.981436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:08.991960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:08.991996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:08.992023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.001381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.001415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:09.001426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.011512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.011546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:09.011556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.022150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.022191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:09.022202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.031750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.031787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:09.031815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.043646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.043699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:09.043710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.055197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.055240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:07:01.254 [2024-07-22 11:02:09.055269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.064631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.064679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:09.064690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.075460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.075506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:09.075518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.086664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.086714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:09.086725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.098097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.098142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:09.098153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.107151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.107198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.254 [2024-07-22 11:02:09.107226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.254 [2024-07-22 11:02:09.117937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.254 [2024-07-22 11:02:09.117982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.255 [2024-07-22 11:02:09.117994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.255 [2024-07-22 11:02:09.128836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.255 [2024-07-22 11:02:09.128880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:6586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.255 [2024-07-22 11:02:09.128908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.255 [2024-07-22 11:02:09.140142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.255 [2024-07-22 11:02:09.140183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.255 [2024-07-22 11:02:09.140211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.255 [2024-07-22 11:02:09.149803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.255 [2024-07-22 11:02:09.149844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.255 [2024-07-22 11:02:09.149855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.255 [2024-07-22 11:02:09.160514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.255 [2024-07-22 11:02:09.160556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.255 [2024-07-22 11:02:09.160585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.255 [2024-07-22 11:02:09.171021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.255 [2024-07-22 11:02:09.171062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.255 [2024-07-22 11:02:09.171074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.255 [2024-07-22 11:02:09.180245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.255 [2024-07-22 11:02:09.180294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.255 [2024-07-22 11:02:09.180305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.191068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.191106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.191118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.201824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.201865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.201876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.211625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.211665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.211677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.222971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.223013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.223025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.232182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.232220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.232231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.244126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.244164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.244175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.254062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.254098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.254109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.265094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.265130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.265142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.274786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 
01:07:01.515 [2024-07-22 11:02:09.274821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.274833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.283707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.283742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.283753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.294491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.294526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.294537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.304102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.515 [2024-07-22 11:02:09.304138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.515 [2024-07-22 11:02:09.304150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.515 [2024-07-22 11:02:09.314095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.314132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.314142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.325500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.325535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.325546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.336391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.336426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.336437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.347112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.347147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.347158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.356060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.356097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.356107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.367393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.367428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.367439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.377432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.377468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.377479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.387739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.387774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.387800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.397599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.397634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.397662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.408040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.408075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.408101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.419090] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.419125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.419152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.429658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.429693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.429719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.516 [2024-07-22 11:02:09.439486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.516 [2024-07-22 11:02:09.439536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.516 [2024-07-22 11:02:09.439547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.776 [2024-07-22 11:02:09.450296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.776 [2024-07-22 11:02:09.450331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.776 [2024-07-22 11:02:09.450342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.776 [2024-07-22 11:02:09.462448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.776 [2024-07-22 11:02:09.462483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.776 [2024-07-22 11:02:09.462494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.776 [2024-07-22 11:02:09.471525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.776 [2024-07-22 11:02:09.471560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.776 [2024-07-22 11:02:09.471571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.776 [2024-07-22 11:02:09.481186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.776 [2024-07-22 11:02:09.481222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.776 [2024-07-22 11:02:09.481233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 01:07:01.776 [2024-07-22 11:02:09.492037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.776 [2024-07-22 11:02:09.492074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.776 [2024-07-22 11:02:09.492086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.776 [2024-07-22 11:02:09.502362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.776 [2024-07-22 11:02:09.502398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.776 [2024-07-22 11:02:09.502408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.776 [2024-07-22 11:02:09.512864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.776 [2024-07-22 11:02:09.512900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.776 [2024-07-22 11:02:09.512911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.776 [2024-07-22 11:02:09.523667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.776 [2024-07-22 11:02:09.523702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.776 [2024-07-22 11:02:09.523713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.776 [2024-07-22 11:02:09.533169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.776 [2024-07-22 11:02:09.533204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.776 [2024-07-22 11:02:09.533215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.544257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.544301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.544312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.555423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.555458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.555469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.566262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.566306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.566317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.575454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.575489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.575500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.586992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.587029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.587041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.598132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.598167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.598178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.607549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.607584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.607611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.619588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.619623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.619634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.629493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.629528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.629539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.640364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.640402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.640413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.649816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.649851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.649862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.660320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.660356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.660368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.671102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.671140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.671151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.681004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.681043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.681054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:01.777 [2024-07-22 11:02:09.691024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2329200) 01:07:01.777 [2024-07-22 11:02:09.691061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:01.777 [2024-07-22 11:02:09.691072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:02.036 01:07:02.036 Latency(us) 01:07:02.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:02.036 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:07:02.036 nvme0n1 : 2.05 23860.84 93.21 0.00 0.00 5289.61 2697.77 48007.09 01:07:02.036 
===================================================================================================================
01:07:02.036 Total : 23860.84 93.21 0.00 0.00 5289.61 2697.77 48007.09
01:07:02.036 0
01:07:02.036 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:07:02.036 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:07:02.036 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
01:07:02.036 | .driver_specific
01:07:02.036 | .nvme_error
01:07:02.036 | .status_code
01:07:02.036 | .command_transient_transport_error'
01:07:02.036 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
01:07:02.036 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 191 > 0 ))
01:07:02.036 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112614
01:07:02.036 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112614 ']'
01:07:02.036 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112614
01:07:02.036 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
01:07:02.296 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
01:07:02.296 11:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112614
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
01:07:02.296 killing process with pid 112614
01:07:02.296 Received shutdown signal, test time was about 2.000000 seconds
01:07:02.296
01:07:02.296 Latency(us)
01:07:02.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:07:02.296 ===================================================================================================================
01:07:02.296 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112614'
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112614
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112614
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112699
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
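The get_transient_errcount trace earlier in this block shows how the harness validates the pass: bdev_get_iostat is issued against the bperf RPC socket and jq pulls the command_transient_transport_error counter out of the per-bdev NVMe error statistics; the (( 191 > 0 )) check succeeds because 191 transient transport errors were recorded for nvme0n1. A minimal standalone sketch of the same query, assuming the RPC socket path and bdev name used in this run:

  # Read the transient transport error counter for nvme0n1 from the running
  # bdevperf instance (socket path and jq filter as in the trace above).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'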
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112699 /var/tmp/bperf.sock
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112699 ']'
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
01:07:02.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
01:07:02.296 11:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:07:02.555 I/O size of 131072 is greater than zero copy threshold (65536).
01:07:02.556 Zero copy mechanism will not be used.
01:07:02.556 [2024-07-22 11:02:10.228938] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
01:07:02.556 [2024-07-22 11:02:10.229021] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112699 ]
01:07:02.556 [2024-07-22 11:02:10.347061] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
01:07:02.556 [2024-07-22 11:02:10.370676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
01:07:02.556 [2024-07-22 11:02:10.413165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
01:07:03.493 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
01:07:03.493 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
01:07:03.493 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:07:03.493 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:07:03.493 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
01:07:03.493 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
01:07:03.493 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:07:03.493 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
01:07:03.493 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:07:03.493 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:07:03.752 nvme0n1
01:07:03.752 11:02:11
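The trace above restarts bdevperf for the 128 KiB (-o 131072), queue-depth 16 random-read pass, waits for its RPC socket, enables per-controller NVMe error statistics, clears any previous CRC32C error injection, and re-attaches the target with TCP data digest (--ddgst) enabled; the corrupt-CRC32C injection and the perform_tests call follow below. A condensed sketch of that bring-up, assuming the same RPC socket, target address and subsystem NQN as this run:

  # Minimal sketch of the setup sequence traced above (socket path, address
  # and NQN are the ones this run uses; adjust for a different target).
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors, retry indefinitely
  rpc accel_error_inject_error -o crc32c -t disable                   # start with CRC32C injection disabled
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                          # attach with TCP data digest enabled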
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:07:03.752 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:03.752 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:07:03.752 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:03.752 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:07:03.752 11:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:07:03.752 I/O size of 131072 is greater than zero copy threshold (65536). 01:07:03.752 Zero copy mechanism will not be used. 01:07:03.752 Running I/O for 2 seconds... 01:07:03.752 [2024-07-22 11:02:11.658306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:03.752 [2024-07-22 11:02:11.658361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:03.752 [2024-07-22 11:02:11.658375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:03.752 [2024-07-22 11:02:11.662468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:03.752 [2024-07-22 11:02:11.662517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:03.752 [2024-07-22 11:02:11.662529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:03.752 [2024-07-22 11:02:11.666225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:03.752 [2024-07-22 11:02:11.666282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:03.752 [2024-07-22 11:02:11.666294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:03.752 [2024-07-22 11:02:11.670180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:03.752 [2024-07-22 11:02:11.670225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:03.752 [2024-07-22 11:02:11.670236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:03.752 [2024-07-22 11:02:11.672363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:03.752 [2024-07-22 11:02:11.672395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:03.752 [2024-07-22 11:02:11.672406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:03.752 [2024-07-22 11:02:11.676404] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:03.752 [2024-07-22 11:02:11.676446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:03.752 [2024-07-22 11:02:11.676457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:03.752 [2024-07-22 11:02:11.680474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:03.752 [2024-07-22 11:02:11.680516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:03.752 [2024-07-22 11:02:11.680526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:03.752 [2024-07-22 11:02:11.682823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:03.752 [2024-07-22 11:02:11.682861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:03.752 [2024-07-22 11:02:11.682872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.685896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.685930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.685940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.689354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.689388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.689399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.692969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.693007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.693018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.696817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.696855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.696882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 01:07:04.013 [2024-07-22 11:02:11.699158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.699195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.699206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.702370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.702406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.702418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.706064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.706104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.706115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.709978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.710019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.710031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.712585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.712619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.712645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.715914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.715954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.715965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.719095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.719135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.719146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.722426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.722466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.722477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.725871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.725905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.725916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.728638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.728672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.728699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.731432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.731471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.731482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.734631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.734669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.734680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.738076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.738116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.738127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.741093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.741128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.013 [2024-07-22 11:02:11.741138] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.013 [2024-07-22 11:02:11.744404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.013 [2024-07-22 11:02:11.744444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.744455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.746874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.746913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.746940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.750385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.750423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.750434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.753705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.753747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.753759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.757538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.757572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.757599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.759632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.759666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.759677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.763338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.763376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.763387] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.766333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.766370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.766381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.768713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.768747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.768757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.772100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.772140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.772151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.775169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.775210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.775221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.778217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.778255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.778277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.781163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.781197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.781208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.784532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.784567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:07:04.014 [2024-07-22 11:02:11.784578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.787820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.787859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.787869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.791123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.791161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.791172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.794213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.794251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.794261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.797444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.797478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.797488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.800231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.800277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.800288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.803624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.803663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.803674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.806994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.807033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.807045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.809738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.809781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.809792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.812815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.812851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.014 [2024-07-22 11:02:11.812861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.014 [2024-07-22 11:02:11.815576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.014 [2024-07-22 11:02:11.815615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.815626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.819318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.819355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.819367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.823206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.823246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.823257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.825513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.825544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.825555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.828666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.828700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.828710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.832552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.832590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.832601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.836374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.836413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.836425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.839962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.839999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.840025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.842328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.842363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.842374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.845833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.845866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.845877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.849373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.849406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.849433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.852916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.852953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.852964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.855313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.855349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.855359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.859132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.859172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.859182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.862626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.862665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.862675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.865250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.865291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.865302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.868435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.868472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.868482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.871972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.015 [2024-07-22 11:02:11.872011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.015 [2024-07-22 11:02:11.872038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.015 [2024-07-22 11:02:11.875288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 
01:07:04.015 [2024-07-22 11:02:11.875323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:07:04.015 [2024-07-22 11:02:11.875334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
01:07:04.015 [2024-07-22 11:02:11.877679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0)
01:07:04.015 [2024-07-22 11:02:11.877712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:07:04.015 [2024-07-22 11:02:11.877722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern — a data digest error on tqpair=(0x22900a0) reported by nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done, followed by the affected READ command (qid:1, nsid:1, len:32, varying cid/lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats continuously from 2024-07-22 11:02:11.881 through 11:02:12.303 ...]
01:07:04.541 [2024-07-22 11:02:12.306912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22900a0) 01:07:04.541 [2024-07-22 11:02:12.306951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.541 [2024-07-22 11:02:12.306961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.541 [2024-07-22 11:02:12.310330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.541 [2024-07-22 11:02:12.310368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.541 [2024-07-22 11:02:12.310378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.541 [2024-07-22 11:02:12.312701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.541 [2024-07-22 11:02:12.312733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.541 [2024-07-22 11:02:12.312744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.541 [2024-07-22 11:02:12.315943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.315982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.316008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.319459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.319496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.319524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.323091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.323133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.323144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.326579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.326621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.326632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.328771] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.328804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.328831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.332228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.332276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.332287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.334620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.334658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.334669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.338323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.338360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.338371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.341129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.341160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.341187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.344541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.344579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.344605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.348259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.348306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.348317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 01:07:04.542 [2024-07-22 11:02:12.351067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.351104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.351131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.353551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.353585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.353596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.357174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.357210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.357221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.359827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.359865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.359876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.362853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.362891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.362902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.366060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.366098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.366109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.369004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.369037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.369048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.372261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.372309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.372320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.375928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.375967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.375978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.379032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.379070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.379080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.381394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.381425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.381436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.384761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.384795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.384806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.388001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.388038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.388049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.390902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.390940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.390951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.394032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.394072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.394083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.397293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.397326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.397336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.399633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.399672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.399682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.403049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.403088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.542 [2024-07-22 11:02:12.403099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.542 [2024-07-22 11:02:12.406529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.542 [2024-07-22 11:02:12.406568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.406579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.409916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.409954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.409964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.413002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.413034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 
[2024-07-22 11:02:12.413044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.415571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.415609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.415620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.418717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.418755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.418766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.421795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.421829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.421840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.424666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.424699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.424710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.427820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.427858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.427869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.431237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.431289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.431300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.433460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.433493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.433504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.436768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.436804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.436814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.440200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.440239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.440250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.443737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.443775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.443786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.446237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.446285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.446296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.448973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.449007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.449017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.452293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.452329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.452339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.454997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.455036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.455047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.458138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.458176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.458186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.461496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.461530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.461541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.463930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.463966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.463976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.467233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.467284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.467295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.543 [2024-07-22 11:02:12.470316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.543 [2024-07-22 11:02:12.470353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.543 [2024-07-22 11:02:12.470363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.473363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.473397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.473408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.476380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.476414] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.476425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.478874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.478910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.478921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.482133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.482171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.482181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.485617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.485652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.485662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.488137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.488170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.488181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.491335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.491372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.491382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.494918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.494958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.494969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.497574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.497605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.497616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.500577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.500611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.500621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.504145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.504184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.504195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.507837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.507876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.507887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.510667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.510706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.510717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.513076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.513109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.513119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.516365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.516402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.516413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.519936] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.519976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.519987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.523074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.523112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.523123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.525505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.525538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.525548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.528810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.528845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.528856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.532659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.532698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.532709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.815 [2024-07-22 11:02:12.536101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.815 [2024-07-22 11:02:12.536140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.815 [2024-07-22 11:02:12.536150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.538380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.538418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.538428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 01:07:04.816 [2024-07-22 11:02:12.541945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.541982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.541993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.545569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.545603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.545613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.549205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.549239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.549251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.551807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.551844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.551855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.554988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.555027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.555038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.558433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.558473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.558484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.562086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.562125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.562136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.564735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.564766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.564776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.567874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.567912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.567922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.571398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.571437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.571448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.573841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.573873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.573884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.576879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.576914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.576924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.580584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.580623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.580634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.582931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.582970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.582980] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.586391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.586430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.586441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.589321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.589352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.589363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.592228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.592275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.592287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.595225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.595275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.595287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.597880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.597915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.597926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.600315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.600347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.600358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.603307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.603344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.603354] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.606877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.606916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.606927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.609457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.609490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.609500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.612763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.612798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.612808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.615884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.615922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.615933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.619377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.619415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.619426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.622863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.622903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.816 [2024-07-22 11:02:12.622913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.816 [2024-07-22 11:02:12.624899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.816 [2024-07-22 11:02:12.624930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:07:04.816 [2024-07-22 11:02:12.624941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.628673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.628712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.628723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.632452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.632492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.632503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.634747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.634785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.634796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.638080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.638120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.638131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.641567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.641601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.641612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.645282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.645314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.645325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.647820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.647857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.647868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.650909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.650948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.650959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.654614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.654653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.654664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.658063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.658102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.658113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.660724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.660757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.660769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.663979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.664017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.664027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.666578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.666617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.666628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.670058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.670097] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.670108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.672545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.672576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.672586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.675699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.675738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.675748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.678460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.678497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.678508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.681595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.681629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.681640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.685088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.685122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.685133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.687386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.687422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.687432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.690536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.690574] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.690585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.693390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.693422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.693433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.696353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.696386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.696397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.699278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.699312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.699323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.701844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.701878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.701888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.705179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.705213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.705224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.707623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.707660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.707670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.710720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.710760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.710771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.817 [2024-07-22 11:02:12.713899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.817 [2024-07-22 11:02:12.713935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.817 [2024-07-22 11:02:12.713945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.818 [2024-07-22 11:02:12.716207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.818 [2024-07-22 11:02:12.716240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.818 [2024-07-22 11:02:12.716251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.818 [2024-07-22 11:02:12.719888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.818 [2024-07-22 11:02:12.719927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.818 [2024-07-22 11:02:12.719938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.818 [2024-07-22 11:02:12.722451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.818 [2024-07-22 11:02:12.722486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.818 [2024-07-22 11:02:12.722496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.818 [2024-07-22 11:02:12.725560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.818 [2024-07-22 11:02:12.725595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.818 [2024-07-22 11:02:12.725606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:04.818 [2024-07-22 11:02:12.729035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.818 [2024-07-22 11:02:12.729070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.818 [2024-07-22 11:02:12.729080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:04.818 [2024-07-22 11:02:12.732312] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.818 [2024-07-22 11:02:12.732349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.818 [2024-07-22 11:02:12.732359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:04.818 [2024-07-22 11:02:12.734774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.818 [2024-07-22 11:02:12.734810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.818 [2024-07-22 11:02:12.734821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:04.818 [2024-07-22 11:02:12.738024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:04.818 [2024-07-22 11:02:12.738062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:04.818 [2024-07-22 11:02:12.738073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.082 [2024-07-22 11:02:12.741674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.082 [2024-07-22 11:02:12.741708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.082 [2024-07-22 11:02:12.741719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.082 [2024-07-22 11:02:12.745291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.082 [2024-07-22 11:02:12.745324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.082 [2024-07-22 11:02:12.745334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.082 [2024-07-22 11:02:12.747632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.082 [2024-07-22 11:02:12.747666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.082 [2024-07-22 11:02:12.747676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.082 [2024-07-22 11:02:12.750785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.750823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.750834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.754424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.754462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.754473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.758298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.758334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.758345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.761964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.762001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.762011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.764207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.764241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.764251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.767802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.767840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.767852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.771646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.771683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.771694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.775084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.775123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.775134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.777702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.777733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.777751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.780777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.780812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.780822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.784537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.784576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.784586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.788298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.788335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.788345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.790954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.790991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.791001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.794025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.794058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.794068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.797050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.797085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.797096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.800106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.800144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.800154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.802662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.802700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.802710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.805522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.805555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.805566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.808283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.808315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.808326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.812167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.812205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.812216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.815604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.815643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.815654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.817932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.817967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:07:05.083 [2024-07-22 11:02:12.817978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.821409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.821443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.821454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.824881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.824917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.824928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.828518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.828555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.828566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.831172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.831209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.831220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.834485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.834524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.834535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.838095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.838134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.838145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.841689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.841723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.841733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.843708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.843740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.843751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.847328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.847364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.847375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.850019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.850061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.850072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.853125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.853159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.853170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.856798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.856837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.856848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.859713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.859752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.859762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.862259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.862310] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.862320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.865903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.865941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.865952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.868359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.868391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.868402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.871232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.871279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.871291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.873959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.873995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.874006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.876814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.876848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.876859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.879354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.879390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.879401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.882767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.882806] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.882817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.885789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.885821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.885832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.888670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.888703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.888714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.891682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.891720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.891731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.894449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.894487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.894498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.897506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.897540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.897552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.900624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.900660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.900670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.903455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 
01:07:05.083 [2024-07-22 11:02:12.903493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.903504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.906459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.906497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.906508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.909545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.909579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.909590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.912846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.912882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.912893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.915339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.915371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.915382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.918565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.918603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.918614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.921434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.921467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.921478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.924463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.924501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.924512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.927773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.927810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.927821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.930864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.930902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.930913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.933907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.933942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.933952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.937146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.937179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.937189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.939870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.939905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.939916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.942786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.942824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.942835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.946147] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.946185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.946196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.949290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.949321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.949331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.952046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.952083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.952094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.083 [2024-07-22 11:02:12.955293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.083 [2024-07-22 11:02:12.955329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.083 [2024-07-22 11:02:12.955340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.958150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.958189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.958199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.960784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.960817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.960827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.963884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.963923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.963934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
01:07:05.084 [2024-07-22 11:02:12.967149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.967188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.967199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.969413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.969445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.969456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.972478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.972513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.972523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.976101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.976137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.976148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.978665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.978704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.978714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.981530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.981564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.981575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.984297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.984329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.984340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.986960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.986999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.987010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.989781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.989814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.989825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.992997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.993031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.993041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.996004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.996042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.996053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:12.998626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:12.998664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:12.998675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:13.002339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:13.002375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:13.002386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:13.005824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:13.005858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:13.005868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:13.009539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:13.009574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:13.009585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.084 [2024-07-22 11:02:13.012192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.084 [2024-07-22 11:02:13.012226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.084 [2024-07-22 11:02:13.012237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.015133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.015171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.015181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.018325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.018361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.018372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.021627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.021662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.021672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.024206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.024239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.024250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.027221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.027258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.027279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.030487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.030526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.030538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.033302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.033334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.033344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.036567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.036603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.036614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.039637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.039673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.039684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.042506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.042543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.042554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.044794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.044827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.044838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.047946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.047985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 
[2024-07-22 11:02:13.047996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.051469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.051508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.051519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.054835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.054873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.054884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.057199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.057232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.057242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.060184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.060222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.060232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.063641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.063684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.063695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.065850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.065890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.065901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.069615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.069650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.069662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.072080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.072116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.072127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.075249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.075296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.075307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.077979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.078014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.078025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.080654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.080688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.080699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.343 [2024-07-22 11:02:13.083523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.343 [2024-07-22 11:02:13.083562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.343 [2024-07-22 11:02:13.083572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.086190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.086227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.086238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.089020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.089054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.089064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.092163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.092202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.092212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.095118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.095156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.095167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.098295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.098332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.098343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.101182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.101215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.101226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.104076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.104113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.104124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.106765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.106803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.106814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.110135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.110174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.110185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.112530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.112563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.112574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.115494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.115532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.115542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.118788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.118827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.118838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.121806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.121837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.121848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.124111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.124144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.124155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.127862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.127900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.127911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.131545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 
[2024-07-22 11:02:13.131585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.131595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.134091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.134127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.134139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.137131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.137165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.137175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.140736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.140773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.140784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.143377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.143412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.143423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.146753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.146792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.146803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.150514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.150552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.150563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.154147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.154186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.154196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.156444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.156475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.156486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.159629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.159668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.159678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.163390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.163427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.163438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.166720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.166758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.166769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.168771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.168803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.344 [2024-07-22 11:02:13.168813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.344 [2024-07-22 11:02:13.172393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.344 [2024-07-22 11:02:13.172430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.172440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.176165] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.176204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.176215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.179994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.180033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.180044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.182691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.182729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.182739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.186055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.186094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.186104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.189580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.189614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.189624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.192449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.192484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.192495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.195210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.195250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.195260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
01:07:05.345 [2024-07-22 11:02:13.198361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.198399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.198410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.201178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.201212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.201223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.203552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.203588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.203599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.206772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.206811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.206821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.210575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.210614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.210625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.213826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.213860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.213870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.216093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.216125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.216135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.219623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.219661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.219672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.223037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.223076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.223086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.225580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.225613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.225624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.228545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.228580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.228590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.231574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.231612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.231623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.234493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.234533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.234543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.237561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.237595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.237606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.240780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.240814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.240824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.243358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.243394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.243405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.246497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.246536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.246547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.249350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.249382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.249392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.251615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.251651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.251661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.255090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.255128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.255139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.258918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.258958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.258969] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.261584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.261615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.345 [2024-07-22 11:02:13.261625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.345 [2024-07-22 11:02:13.264766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.345 [2024-07-22 11:02:13.264801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.346 [2024-07-22 11:02:13.264812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.346 [2024-07-22 11:02:13.268302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.346 [2024-07-22 11:02:13.268336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.346 [2024-07-22 11:02:13.268346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.346 [2024-07-22 11:02:13.271906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.346 [2024-07-22 11:02:13.271944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.346 [2024-07-22 11:02:13.271955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.604 [2024-07-22 11:02:13.274535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.604 [2024-07-22 11:02:13.274574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.604 [2024-07-22 11:02:13.274584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.604 [2024-07-22 11:02:13.277813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.604 [2024-07-22 11:02:13.277847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.604 [2024-07-22 11:02:13.277857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.604 [2024-07-22 11:02:13.281539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.604 [2024-07-22 11:02:13.281573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:07:05.604 [2024-07-22 11:02:13.281584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.604 [2024-07-22 11:02:13.284757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.604 [2024-07-22 11:02:13.284791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.604 [2024-07-22 11:02:13.284801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.604 [2024-07-22 11:02:13.287141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.604 [2024-07-22 11:02:13.287179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.604 [2024-07-22 11:02:13.287190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.604 [2024-07-22 11:02:13.290564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.604 [2024-07-22 11:02:13.290604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.604 [2024-07-22 11:02:13.290615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.604 [2024-07-22 11:02:13.293965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.604 [2024-07-22 11:02:13.294000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.604 [2024-07-22 11:02:13.294011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.604 [2024-07-22 11:02:13.296063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.604 [2024-07-22 11:02:13.296096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.296107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.299539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.299576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.299587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.302613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.302650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.302661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.305634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.305667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.305678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.308254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.308296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.308306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.311521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.311560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.311571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.314400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.314438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.314449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.317678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.317711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.317722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.320488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.320522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.320532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.323292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.323329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.323339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.326288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.326324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.326334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.328886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.328920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.328931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.332095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.332134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.332146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.335775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.335814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.335825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.338384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.338421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.338431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.341642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.341675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.341686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.345340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 
01:07:05.605 [2024-07-22 11:02:13.345374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.345385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.348015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.348051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.348061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.351108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.351147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.351158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.354951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.354991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.355002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.358373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.358411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.358422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.360382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.360411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.360438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.364311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.364347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.364357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.367422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.367460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.367471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.369851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.369883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.369909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.373546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.373581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.373592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.376571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.376603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.376614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.379330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.379368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.379378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.382843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.605 [2024-07-22 11:02:13.382893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.605 [2024-07-22 11:02:13.382904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.605 [2024-07-22 11:02:13.386845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.386885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.386895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.390509] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.390548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.390558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.393865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.393897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.393924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.396263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.396302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.396329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.399724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.399762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.399773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.403015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.403055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.403082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.405896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.405930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.405941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.408954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.408988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.408999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
01:07:05.606 [2024-07-22 11:02:13.412131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.412168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.412179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.415004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.415044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.415055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.418114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.418152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.418163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.420919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.420954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.420965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.423733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.423773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.423783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.426969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.427009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.427020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.430235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.430282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.430293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.432826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.432859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.432870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.436306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.436342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.436353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.440142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.440180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.440191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.443910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.443949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.443960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.446328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.446364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.446374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.449414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.449447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.449458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.452918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.452956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.452967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.455199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.455237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.455248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.458232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.458278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.458289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.461117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.461151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.461161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.463854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.463893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.463904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.467003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.467041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.467052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.470219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.470256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.606 [2024-07-22 11:02:13.470279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.606 [2024-07-22 11:02:13.472932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.606 [2024-07-22 11:02:13.472965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:07:05.607 [2024-07-22 11:02:13.472976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.476075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.476114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.476125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.478665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.478704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.478714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.481520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.481554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.481564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.484208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.484241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.484252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.487564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.487601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.487611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.491167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.491205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.491216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.493784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.493815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.493826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.497118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.497153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.497163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.500916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.500955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.500966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.503219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.503255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.503277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.506383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.506420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.506431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.509665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.509699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.509709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.513155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.513190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.513200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.515614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.515650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.515661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.519130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.519170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.519181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.522705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.522743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.522754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.525926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.525962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.525973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.528312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.528343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.528354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.531547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.531584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.531610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.607 [2024-07-22 11:02:13.535168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.607 [2024-07-22 11:02:13.535207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.607 [2024-07-22 11:02:13.535234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.538574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 
[2024-07-22 11:02:13.538612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.538623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.540842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.540874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.540884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.543827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.543866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.543876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.546806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.546845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.546855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.549668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.549701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.549711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.552891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.552924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.552951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.556010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.556046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.556057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.558360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.558395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.558405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.561990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.562026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.562037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.565855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.565891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.565901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.569547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.569581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.569592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.572218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.572250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.572277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.575236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.575284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.575295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.578768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.578807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.578818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.582566] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.582604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.582615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.585059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.585099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.585110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.588411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.588449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.588460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.866 [2024-07-22 11:02:13.591742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.866 [2024-07-22 11:02:13.591781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.866 [2024-07-22 11:02:13.591792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.594983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.595021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.595032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.597406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.597449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.597459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.600799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.600834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.600860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
01:07:05.867 [2024-07-22 11:02:13.603155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.603192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.603202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.606535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.606572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.606583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.610351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.610387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.610414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.613396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.613438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.613464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.615913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.615945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.615970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.619118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.619156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.619181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.622334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.622370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.622381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.625539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.625570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.625596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.627958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.627989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.628015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.631489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.631525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.631551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.634119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.634158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.634168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.637338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.637369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.637380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.640399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.640435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.640445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:05.867 [2024-07-22 11:02:13.643179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22900a0) 01:07:05.867 [2024-07-22 11:02:13.643215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:05.867 [2024-07-22 11:02:13.643241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:05.867 01:07:05.867 Latency(us) 01:07:05.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:05.867 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:07:05.867 nvme0n1 : 2.00 9924.23 1240.53 0.00 0.00 1609.30 463.88 9896.20 01:07:05.867 =================================================================================================================== 01:07:05.867 Total : 9924.23 1240.53 0.00 0.00 1609.30 463.88 9896.20 01:07:05.867 0 01:07:05.867 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:07:05.867 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:07:05.867 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:07:05.867 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:07:05.867 | .driver_specific 01:07:05.867 | .nvme_error 01:07:05.867 | .status_code 01:07:05.867 | .command_transient_transport_error' 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 640 > 0 )) 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112699 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112699 ']' 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112699 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112699 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:07:06.126 killing process with pid 112699 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112699' 01:07:06.126 Received shutdown signal, test time was about 2.000000 seconds 01:07:06.126 01:07:06.126 Latency(us) 01:07:06.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:06.126 =================================================================================================================== 01:07:06.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112699 01:07:06.126 11:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112699 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:07:06.384 11:02:14 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112788 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112788 /var/tmp/bperf.sock 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112788 ']' 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:06.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:06.384 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:07:06.384 [2024-07-22 11:02:14.111496] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:07:06.384 [2024-07-22 11:02:14.111565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112788 ] 01:07:06.384 [2024-07-22 11:02:14.228976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
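The host/digest.sh@57-60 trace above starts a fresh bdevperf instance for the randwrite 4096/128 error pass and then waits for its RPC socket. A minimal sketch of that launch step, reusing the paths from this run; the polling loop is only an illustrative stand-in for the suite's waitforlisten helper, and the rpc_get_methods probe is an assumed readiness check, not what the script itself runs:

    #!/usr/bin/env bash
    # Launch bdevperf idle (-z) so the workload only starts once perform_tests is sent later.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    "$bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Stand-in for waitforlisten: poll until the RPC server answers on the UNIX domain socket.
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "bdevperf (pid $bperfpid) is listening on $sock"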
01:07:06.384 [2024-07-22 11:02:14.252838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:06.384 [2024-07-22 11:02:14.295177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:07.317 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:07.317 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:07:07.317 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:07:07.317 11:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:07:07.317 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:07:07.317 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:07.317 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:07:07.317 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:07.317 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:07:07.317 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:07:07.575 nvme0n1 01:07:07.575 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:07:07.575 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:07.575 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:07:07.575 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:07.575 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:07:07.575 11:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:07:07.833 Running I/O for 2 seconds... 
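The host/digest.sh@61-69 trace above is the setup that produces the digest failures logged below: per-bdev NVMe error counters are enabled, the controller is attached over TCP with data digest (--ddgst) turned on, and the crc32c accel operation is switched to deliberately corrupt results before bdevperf is told to run. A condensed sketch of that RPC sequence, plus the counter readback traced earlier at host/digest.sh@71; the bdevperf socket comes straight from the trace, while the socket behind the rpc_cmd calls is not visible in this excerpt, so the SPDK default path is an assumption:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock     # bdevperf RPC socket, as shown in the trace
    target_sock=/var/tmp/spdk.sock     # assumption: the socket used by rpc_cmd is not shown here

    # Keep per-bdev NVMe error statistics and retry failed I/O indefinitely at the bdev layer.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start with injection disabled, then attach the controller with TCP data digest enabled.
    "$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 256 crc32c operations so computed data digests stop matching.
    "$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t corrupt -i 256

    # Kick off the queued 2-second randwrite workload in the idle bdevperf instance.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests

    # Read back how many completions ended as COMMAND TRANSIENT TRANSPORT ERROR.
    "$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'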
01:07:07.833 [2024-07-22 11:02:15.567092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f6458 01:07:07.833 [2024-07-22 11:02:15.567937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.567972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.576474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f3e60 01:07:07.833 [2024-07-22 11:02:15.577433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.577466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.585716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190df988 01:07:07.833 [2024-07-22 11:02:15.586790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.586823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.594978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eb760 01:07:07.833 [2024-07-22 11:02:15.596161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.596194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.602814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e01f8 01:07:07.833 [2024-07-22 11:02:15.604148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.604182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.612250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fef90 01:07:07.833 [2024-07-22 11:02:15.612980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.613013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.620652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eff18 01:07:07.833 [2024-07-22 11:02:15.621226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.621259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 
sqhd:0066 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.629041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e9e10 01:07:07.833 [2024-07-22 11:02:15.629542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.629574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.639125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eff18 01:07:07.833 [2024-07-22 11:02:15.640214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.640247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.647577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190df988 01:07:07.833 [2024-07-22 11:02:15.648530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.648562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.656451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f5378 01:07:07.833 [2024-07-22 11:02:15.657158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.657191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.665078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fb480 01:07:07.833 [2024-07-22 11:02:15.666036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.666070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.673683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e0a68 01:07:07.833 [2024-07-22 11:02:15.674553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.674584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.681823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fa7d8 01:07:07.833 [2024-07-22 11:02:15.682549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.682580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.690151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f5be8 01:07:07.833 [2024-07-22 11:02:15.690773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.690804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.700593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f7970 01:07:07.833 [2024-07-22 11:02:15.701365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.833 [2024-07-22 11:02:15.701396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:07:07.833 [2024-07-22 11:02:15.708643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e4578 01:07:07.833 [2024-07-22 11:02:15.710015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.834 [2024-07-22 11:02:15.710050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:07:07.834 [2024-07-22 11:02:15.716333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f4f40 01:07:07.834 [2024-07-22 11:02:15.716936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.834 [2024-07-22 11:02:15.716967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:07:07.834 [2024-07-22 11:02:15.725198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e1f80 01:07:07.834 [2024-07-22 11:02:15.725808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.834 [2024-07-22 11:02:15.725839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:07:07.834 [2024-07-22 11:02:15.735923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190df550 01:07:07.834 [2024-07-22 11:02:15.737022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.834 [2024-07-22 11:02:15.737054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:07:07.834 [2024-07-22 11:02:15.744058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fcdd0 01:07:07.834 [2024-07-22 11:02:15.745014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.834 [2024-07-22 11:02:15.745047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:07:07.834 [2024-07-22 11:02:15.752696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190df118 01:07:07.834 [2024-07-22 11:02:15.753679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.834 [2024-07-22 11:02:15.753711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:07:07.834 [2024-07-22 11:02:15.763201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f1868 01:07:07.834 [2024-07-22 11:02:15.764699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:07.834 [2024-07-22 11:02:15.764728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.769481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e4578 01:07:08.093 [2024-07-22 11:02:15.770244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.770281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.780134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f31b8 01:07:08.093 [2024-07-22 11:02:15.781412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.781442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.788451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e5658 01:07:08.093 [2024-07-22 11:02:15.789457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.789489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.796986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e9e10 01:07:08.093 [2024-07-22 11:02:15.798041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.798072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.805211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f1868 01:07:08.093 [2024-07-22 11:02:15.805997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.806031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.813847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190de038 01:07:08.093 [2024-07-22 11:02:15.814658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.814689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.824283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f5be8 01:07:08.093 [2024-07-22 11:02:15.825610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.825641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.830617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fd640 01:07:08.093 [2024-07-22 11:02:15.831196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.831226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.841149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fa3a0 01:07:08.093 [2024-07-22 11:02:15.842263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.842301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.849437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fb8b8 01:07:08.093 [2024-07-22 11:02:15.850306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.850339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.858025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f7da8 01:07:08.093 [2024-07-22 11:02:15.858916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.858945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.868548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190dfdc0 01:07:08.093 [2024-07-22 11:02:15.869953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 
11:02:15.869986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.877783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ee190 01:07:08.093 [2024-07-22 11:02:15.879307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.879335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.884102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f2510 01:07:08.093 [2024-07-22 11:02:15.884765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.093 [2024-07-22 11:02:15.884795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:07:08.093 [2024-07-22 11:02:15.893847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f8618 01:07:08.094 [2024-07-22 11:02:15.894876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.894908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.902914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f7da8 01:07:08.094 [2024-07-22 11:02:15.903702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.903733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.911568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e49b0 01:07:08.094 [2024-07-22 11:02:15.912585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.912616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.921274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190de8a8 01:07:08.094 [2024-07-22 11:02:15.922675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.922709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.928380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e8088 01:07:08.094 [2024-07-22 11:02:15.929272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:07:08.094 [2024-07-22 11:02:15.929308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.936732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e8088 01:07:08.094 [2024-07-22 11:02:15.937494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.937524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.945205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e23b8 01:07:08.094 [2024-07-22 11:02:15.945883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.945913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.955845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ee5c8 01:07:08.094 [2024-07-22 11:02:15.957111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.957143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.964772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e8d30 01:07:08.094 [2024-07-22 11:02:15.966047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.966079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.971898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e1b48 01:07:08.094 [2024-07-22 11:02:15.972672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.972702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.980382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eaab8 01:07:08.094 [2024-07-22 11:02:15.981032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.981062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.990726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f46d0 01:07:08.094 [2024-07-22 11:02:15.991538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6213 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.991571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:15.999225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e8d30 01:07:08.094 [2024-07-22 11:02:15.999887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:15.999919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:16.008054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190df118 01:07:08.094 [2024-07-22 11:02:16.008944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:16.008975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:07:08.094 [2024-07-22 11:02:16.016976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ea248 01:07:08.094 [2024-07-22 11:02:16.017622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.094 [2024-07-22 11:02:16.017655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.025548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f1430 01:07:08.354 [2024-07-22 11:02:16.026108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.026141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.036079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f6890 01:07:08.354 [2024-07-22 11:02:16.037456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.037486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.042418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190dece0 01:07:08.354 [2024-07-22 11:02:16.043053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.043081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.053117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190df550 01:07:08.354 [2024-07-22 11:02:16.054152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:6925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.054185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.061568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ec408 01:07:08.354 [2024-07-22 11:02:16.062475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.062505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.070065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e9168 01:07:08.354 [2024-07-22 11:02:16.070862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.070893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.080696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fb8b8 01:07:08.354 [2024-07-22 11:02:16.082111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.082145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.087053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f2d80 01:07:08.354 [2024-07-22 11:02:16.087619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.087649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.097657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ed4e8 01:07:08.354 [2024-07-22 11:02:16.098838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.098871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.106627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fc560 01:07:08.354 [2024-07-22 11:02:16.107789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.107820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.115032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e0a68 01:07:08.354 [2024-07-22 11:02:16.116088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:68 nsid:1 lba:25417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.116119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.123638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eb328 01:07:08.354 [2024-07-22 11:02:16.124682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.124713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.132545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f4f40 01:07:08.354 [2024-07-22 11:02:16.133594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.133634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.140895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fdeb0 01:07:08.354 [2024-07-22 11:02:16.141846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.141877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.149656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f7538 01:07:08.354 [2024-07-22 11:02:16.150602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.150632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.160366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eaab8 01:07:08.354 [2024-07-22 11:02:16.161801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.161840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.166661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fda78 01:07:08.354 [2024-07-22 11:02:16.167227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.167257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.178053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e6300 01:07:08.354 [2024-07-22 11:02:16.179500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.179528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.184380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fd208 01:07:08.354 [2024-07-22 11:02:16.184953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.184983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.194383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fe2e8 01:07:08.354 [2024-07-22 11:02:16.195102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.354 [2024-07-22 11:02:16.195137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:07:08.354 [2024-07-22 11:02:16.203528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190dfdc0 01:07:08.355 [2024-07-22 11:02:16.204620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.355 [2024-07-22 11:02:16.204655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:07:08.355 [2024-07-22 11:02:16.212530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ed4e8 01:07:08.355 [2024-07-22 11:02:16.213617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.355 [2024-07-22 11:02:16.213649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:07:08.355 [2024-07-22 11:02:16.220966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e49b0 01:07:08.355 [2024-07-22 11:02:16.221965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.355 [2024-07-22 11:02:16.222002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:07:08.355 [2024-07-22 11:02:16.229681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190df550 01:07:08.355 [2024-07-22 11:02:16.230281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.355 [2024-07-22 11:02:16.230313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:07:08.355 [2024-07-22 11:02:16.238653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e6fa8 01:07:08.355 [2024-07-22 
11:02:16.239470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.355 [2024-07-22 11:02:16.239501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:07:08.355 [2024-07-22 11:02:16.247168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e8d30 01:07:08.355 [2024-07-22 11:02:16.247879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.355 [2024-07-22 11:02:16.247910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:07:08.355 [2024-07-22 11:02:16.257825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f1868 01:07:08.355 [2024-07-22 11:02:16.259143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.355 [2024-07-22 11:02:16.259174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:07:08.355 [2024-07-22 11:02:16.267148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ea248 01:07:08.355 [2024-07-22 11:02:16.268591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.355 [2024-07-22 11:02:16.268619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:07:08.355 [2024-07-22 11:02:16.273497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f0bc0 01:07:08.355 [2024-07-22 11:02:16.274204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.355 [2024-07-22 11:02:16.274235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:07:08.355 [2024-07-22 11:02:16.284177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f8618 01:07:08.355 [2024-07-22 11:02:16.285397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.355 [2024-07-22 11:02:16.285428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.292086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ec840 01:07:08.614 [2024-07-22 11:02:16.293477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.293508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.301640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f0ff8 
01:07:08.614 [2024-07-22 11:02:16.302418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.302449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.310119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e27f0 01:07:08.614 [2024-07-22 11:02:16.310736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.310770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.318612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f7538 01:07:08.614 [2024-07-22 11:02:16.319135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.319166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.329597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fb8b8 01:07:08.614 [2024-07-22 11:02:16.331085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.331113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.335949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f57b0 01:07:08.614 [2024-07-22 11:02:16.336572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.336603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.345820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ddc00 01:07:08.614 [2024-07-22 11:02:16.346805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.346836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.354980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fda78 01:07:08.614 [2024-07-22 11:02:16.355726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.355758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.363468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with 
pdu=0x2000190edd58 01:07:08.614 [2024-07-22 11:02:16.364094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.364125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.374003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f9b30 01:07:08.614 [2024-07-22 11:02:16.375474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.375501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.380334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e3060 01:07:08.614 [2024-07-22 11:02:16.381066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.381097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.389377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e5ec8 01:07:08.614 [2024-07-22 11:02:16.390117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.390148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.398592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e73e0 01:07:08.614 [2024-07-22 11:02:16.399098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.399128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.409600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f9b30 01:07:08.614 [2024-07-22 11:02:16.411076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.411105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.415952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e9168 01:07:08.614 [2024-07-22 11:02:16.416695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.416723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.426639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24baf80) with pdu=0x2000190f5378 01:07:08.614 [2024-07-22 11:02:16.427769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.427801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.435118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f7970 01:07:08.614 [2024-07-22 11:02:16.436239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.436275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.443439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eee38 01:07:08.614 [2024-07-22 11:02:16.444318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.444350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.452137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e73e0 01:07:08.614 [2024-07-22 11:02:16.453034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.453064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.462812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e0630 01:07:08.614 [2024-07-22 11:02:16.464234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.464273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.469158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f57b0 01:07:08.614 [2024-07-22 11:02:16.469849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.469880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.478550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e1b48 01:07:08.614 [2024-07-22 11:02:16.479352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.479382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.489261] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f4298 01:07:08.614 [2024-07-22 11:02:16.490591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.490626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.495630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e27f0 01:07:08.614 [2024-07-22 11:02:16.496200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.614 [2024-07-22 11:02:16.496230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:07:08.614 [2024-07-22 11:02:16.506354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ff3c8 01:07:08.615 [2024-07-22 11:02:16.507447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.615 [2024-07-22 11:02:16.507479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:07:08.615 [2024-07-22 11:02:16.515747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fc998 01:07:08.615 [2024-07-22 11:02:16.516966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.615 [2024-07-22 11:02:16.516998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:07:08.615 [2024-07-22 11:02:16.525138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f0ff8 01:07:08.615 [2024-07-22 11:02:16.526492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.615 [2024-07-22 11:02:16.526525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:07:08.615 [2024-07-22 11:02:16.534236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e23b8 01:07:08.615 [2024-07-22 11:02:16.535584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.615 [2024-07-22 11:02:16.535617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:07:08.615 [2024-07-22 11:02:16.541692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f0bc0 01:07:08.615 [2024-07-22 11:02:16.542659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.615 [2024-07-22 11:02:16.542690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:07:08.873 [2024-07-22 11:02:16.552419] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190dfdc0 01:07:08.873 [2024-07-22 11:02:16.553897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.873 [2024-07-22 11:02:16.553928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:07:08.873 [2024-07-22 11:02:16.558802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fe720 01:07:08.873 [2024-07-22 11:02:16.559546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.873 [2024-07-22 11:02:16.559575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:07:08.873 [2024-07-22 11:02:16.568152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eea00 01:07:08.873 [2024-07-22 11:02:16.569011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.873 [2024-07-22 11:02:16.569041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:07:08.873 [2024-07-22 11:02:16.577128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eaab8 01:07:08.873 [2024-07-22 11:02:16.577626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.873 [2024-07-22 11:02:16.577659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:07:08.873 [2024-07-22 11:02:16.586092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f2d80 01:07:08.873 [2024-07-22 11:02:16.586835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.873 [2024-07-22 11:02:16.586866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:07:08.873 [2024-07-22 11:02:16.594569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e4578 01:07:08.873 [2024-07-22 11:02:16.595303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.873 [2024-07-22 11:02:16.595334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:07:08.873 [2024-07-22 11:02:16.605214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190de470 01:07:08.873 [2024-07-22 11:02:16.606489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.873 [2024-07-22 11:02:16.606522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:07:08.873 
[2024-07-22 11:02:16.613078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f31b8 01:07:08.873 [2024-07-22 11:02:16.614504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.873 [2024-07-22 11:02:16.614538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.620910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fef90 01:07:08.874 [2024-07-22 11:02:16.621544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.621578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.631616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f9f68 01:07:08.874 [2024-07-22 11:02:16.632643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.632675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.641693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ef6a8 01:07:08.874 [2024-07-22 11:02:16.643207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.643235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.648036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f7538 01:07:08.874 [2024-07-22 11:02:16.648691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.648721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.658708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e8d30 01:07:08.874 [2024-07-22 11:02:16.659949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.659981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.666954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f9f68 01:07:08.874 [2024-07-22 11:02:16.667946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.667979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 01:07:08.874 [2024-07-22 11:02:16.675476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e01f8 01:07:08.874 [2024-07-22 11:02:16.676392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.676422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.683657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190dfdc0 01:07:08.874 [2024-07-22 11:02:16.684449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.684480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.694361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e1f80 01:07:08.874 [2024-07-22 11:02:16.695752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.695781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.700646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f1430 01:07:08.874 [2024-07-22 11:02:16.701333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.701364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.709621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fbcf0 01:07:08.874 [2024-07-22 11:02:16.710313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.710344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.718075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eee38 01:07:08.874 [2024-07-22 11:02:16.718642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.718672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.728886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f3e60 01:07:08.874 [2024-07-22 11:02:16.730093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.730134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 
sqhd:0054 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.736853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ddc00 01:07:08.874 [2024-07-22 11:02:16.738225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.738262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.746801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e0ea0 01:07:08.874 [2024-07-22 11:02:16.747859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.747892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.755254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f96f8 01:07:08.874 [2024-07-22 11:02:16.756309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.756341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.764351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f31b8 01:07:08.874 [2024-07-22 11:02:16.765392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.765422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.771776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f4f40 01:07:08.874 [2024-07-22 11:02:16.772446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.772476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.782474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e8d30 01:07:08.874 [2024-07-22 11:02:16.783669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.783700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.791551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ec408 01:07:08.874 [2024-07-22 11:02:16.792743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.792774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:07:08.874 [2024-07-22 11:02:16.798593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e5ec8 01:07:08.874 [2024-07-22 11:02:16.799282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:08.874 [2024-07-22 11:02:16.799311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.807696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e3060 01:07:09.132 [2024-07-22 11:02:16.808397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.808428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.817054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ff3c8 01:07:09.132 [2024-07-22 11:02:16.817855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.817888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.825771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fb480 01:07:09.132 [2024-07-22 11:02:16.826598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.826630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.836466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e5a90 01:07:09.132 [2024-07-22 11:02:16.837814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.837845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.842824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eea00 01:07:09.132 [2024-07-22 11:02:16.843431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.843462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.853500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e49b0 01:07:09.132 [2024-07-22 11:02:16.854622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.854655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.862807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f8e88 01:07:09.132 [2024-07-22 11:02:16.864041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.864072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.871120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e1710 01:07:09.132 [2024-07-22 11:02:16.872096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.872128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.879814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f9f68 01:07:09.132 [2024-07-22 11:02:16.880825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.880855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.888733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ed4e8 01:07:09.132 [2024-07-22 11:02:16.889385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.889415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.897240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e9168 01:07:09.132 [2024-07-22 11:02:16.897816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.897847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.905263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e6fa8 01:07:09.132 [2024-07-22 11:02:16.905934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.905966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.915939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e0ea0 01:07:09.132 [2024-07-22 11:02:16.917104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.917134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.923831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fb048 01:07:09.132 [2024-07-22 11:02:16.925172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.925205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:07:09.132 [2024-07-22 11:02:16.933780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eea00 01:07:09.132 [2024-07-22 11:02:16.934831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.132 [2024-07-22 11:02:16.934861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:16.942244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e9168 01:07:09.133 [2024-07-22 11:02:16.943170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:16.943200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:16.951085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f6020 01:07:09.133 [2024-07-22 11:02:16.952133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:16.952163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:16.960409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e49b0 01:07:09.133 [2024-07-22 11:02:16.961578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:16.961609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:16.969451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f4f40 01:07:09.133 [2024-07-22 11:02:16.970628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:16.970660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:16.976843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e0ea0 01:07:09.133 [2024-07-22 11:02:16.977639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 
11:02:16.977669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:16.986151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f5378 01:07:09.133 [2024-07-22 11:02:16.987061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:16.987090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:16.996825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e3d08 01:07:09.133 [2024-07-22 11:02:16.998255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:16.998293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:17.003161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190de038 01:07:09.133 [2024-07-22 11:02:17.003728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:17.003757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:17.013001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e4de8 01:07:09.133 [2024-07-22 11:02:17.013943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:17.013975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:17.022512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fb048 01:07:09.133 [2024-07-22 11:02:17.023549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:17.023581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:17.030412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ef270 01:07:09.133 [2024-07-22 11:02:17.031740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:17.031772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:17.039973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ef6a8 01:07:09.133 [2024-07-22 11:02:17.040686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:07:09.133 [2024-07-22 11:02:17.040717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:17.048653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190efae0 01:07:09.133 [2024-07-22 11:02:17.049540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:17.049572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:07:09.133 [2024-07-22 11:02:17.057361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ee5c8 01:07:09.133 [2024-07-22 11:02:17.058169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.133 [2024-07-22 11:02:17.058201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.065838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f6458 01:07:09.391 [2024-07-22 11:02:17.066636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.066666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.076504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e0ea0 01:07:09.391 [2024-07-22 11:02:17.077814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.077845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.082850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e1710 01:07:09.391 [2024-07-22 11:02:17.083428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.083458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.093507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f9b30 01:07:09.391 [2024-07-22 11:02:17.094472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.094505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.101966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fd640 01:07:09.391 [2024-07-22 11:02:17.102798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1560 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.102828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.112626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fc128 01:07:09.391 [2024-07-22 11:02:17.114090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.114121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.118966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fa3a0 01:07:09.391 [2024-07-22 11:02:17.119696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.119726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.129626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e01f8 01:07:09.391 [2024-07-22 11:02:17.130867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.130898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.137549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f5378 01:07:09.391 [2024-07-22 11:02:17.138947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.138982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.145280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e4de8 01:07:09.391 [2024-07-22 11:02:17.145910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.145941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.155931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e5220 01:07:09.391 [2024-07-22 11:02:17.157063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.157093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.164229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190df988 01:07:09.391 [2024-07-22 11:02:17.165126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:17978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.165159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.172932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e5220 01:07:09.391 [2024-07-22 11:02:17.173852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.173882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.183606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e4de8 01:07:09.391 [2024-07-22 11:02:17.185019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.185049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.189949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f96f8 01:07:09.391 [2024-07-22 11:02:17.190642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.190671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.200616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f3e60 01:07:09.391 [2024-07-22 11:02:17.201827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.201859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.209663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eb760 01:07:09.391 [2024-07-22 11:02:17.210867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.210899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.218191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ee190 01:07:09.391 [2024-07-22 11:02:17.219284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.219315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.226255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e1b48 01:07:09.391 [2024-07-22 11:02:17.227035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:23131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.227067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.234954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f8618 01:07:09.391 [2024-07-22 11:02:17.235665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.235696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.243867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190df988 01:07:09.391 [2024-07-22 11:02:17.244324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.244356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.252773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fc998 01:07:09.391 [2024-07-22 11:02:17.253475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.253504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.261258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f6020 01:07:09.391 [2024-07-22 11:02:17.261843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.261874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.272729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e5220 01:07:09.391 [2024-07-22 11:02:17.274175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.274207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.279065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e6738 01:07:09.391 [2024-07-22 11:02:17.279645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.279675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.289331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f8e88 01:07:09.391 [2024-07-22 11:02:17.290068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.290101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.297990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f5378 01:07:09.391 [2024-07-22 11:02:17.298906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.298938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.306703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190ef270 01:07:09.391 [2024-07-22 11:02:17.307538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.391 [2024-07-22 11:02:17.307569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:07:09.391 [2024-07-22 11:02:17.315163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190df988 01:07:09.391 [2024-07-22 11:02:17.315986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.392 [2024-07-22 11:02:17.316016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:07:09.649 [2024-07-22 11:02:17.324081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e4578 01:07:09.649 [2024-07-22 11:02:17.324542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.649 [2024-07-22 11:02:17.324573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:07:09.649 [2024-07-22 11:02:17.335150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e0a68 01:07:09.649 [2024-07-22 11:02:17.336609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.649 [2024-07-22 11:02:17.336636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:07:09.649 [2024-07-22 11:02:17.341482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e5220 01:07:09.649 [2024-07-22 11:02:17.342214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.649 [2024-07-22 11:02:17.342244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:07:09.649 [2024-07-22 11:02:17.350830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f7da8 01:07:09.649 [2024-07-22 
11:02:17.351680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.649 [2024-07-22 11:02:17.351713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:07:09.649 [2024-07-22 11:02:17.359844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e3498 01:07:09.649 [2024-07-22 11:02:17.360335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.649 [2024-07-22 11:02:17.360367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:07:09.649 [2024-07-22 11:02:17.368718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fcdd0 01:07:09.649 [2024-07-22 11:02:17.369469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.649 [2024-07-22 11:02:17.369501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:07:09.649 [2024-07-22 11:02:17.377161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eaef0 01:07:09.649 [2024-07-22 11:02:17.377919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.649 [2024-07-22 11:02:17.377953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:07:09.649 [2024-07-22 11:02:17.387871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e9168 01:07:09.649 [2024-07-22 11:02:17.389117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.389149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.396224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190feb58 01:07:09.650 [2024-07-22 11:02:17.397208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.397241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.404952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e49b0 01:07:09.650 [2024-07-22 11:02:17.405991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.406026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.413976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e2c28 
01:07:09.650 [2024-07-22 11:02:17.414626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.414658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.422517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f7970 01:07:09.650 [2024-07-22 11:02:17.423070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.423101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.432722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e8088 01:07:09.650 [2024-07-22 11:02:17.433892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.433925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.440989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e1f80 01:07:09.650 [2024-07-22 11:02:17.441989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.442025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.449708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190fac10 01:07:09.650 [2024-07-22 11:02:17.450623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.450656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.458193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190eea00 01:07:09.650 [2024-07-22 11:02:17.458973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.459004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.467116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e6fa8 01:07:09.650 [2024-07-22 11:02:17.467655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.467686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.475650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with 
pdu=0x2000190ebfd0 01:07:09.650 [2024-07-22 11:02:17.476092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.476124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.484894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190efae0 01:07:09.650 [2024-07-22 11:02:17.485421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.485451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.493434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190de038 01:07:09.650 [2024-07-22 11:02:17.493879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.493908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.503600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e6fa8 01:07:09.650 [2024-07-22 11:02:17.504635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.504665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.512058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190e3060 01:07:09.650 [2024-07-22 11:02:17.512967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.512998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.520361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f0350 01:07:09.650 [2024-07-22 11:02:17.521105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.521136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.529042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190dfdc0 01:07:09.650 [2024-07-22 11:02:17.529857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.529888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.539711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24baf80) with pdu=0x2000190e88f8 01:07:09.650 [2024-07-22 11:02:17.541020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.541051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:07:09.650 [2024-07-22 11:02:17.546056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24baf80) with pdu=0x2000190f5378 01:07:09.650 [2024-07-22 11:02:17.546636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:07:09.650 [2024-07-22 11:02:17.546665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:07:09.650 01:07:09.650 Latency(us) 01:07:09.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:09.650 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:07:09.650 nvme0n1 : 2.00 28543.76 111.50 0.00 0.00 4480.15 1829.22 14423.18 01:07:09.650 =================================================================================================================== 01:07:09.650 Total : 28543.76 111.50 0.00 0.00 4480.15 1829.22 14423.18 01:07:09.650 0 01:07:09.650 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:07:09.650 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:07:09.650 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:07:09.650 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:07:09.650 | .driver_specific 01:07:09.650 | .nvme_error 01:07:09.650 | .status_code 01:07:09.650 | .command_transient_transport_error' 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 224 > 0 )) 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112788 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112788 ']' 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112788 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112788 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112788' 01:07:09.923 killing process with pid 112788 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112788 01:07:09.923 Received shutdown signal, test time was about 2.000000 seconds 01:07:09.923 01:07:09.923 Latency(us) 01:07:09.923 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:09.923 =================================================================================================================== 01:07:09.923 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:07:09.923 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112788 01:07:10.180 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112868 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112868 /var/tmp/bperf.sock 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112868 ']' 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:07:10.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:10.181 11:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:07:10.181 I/O size of 131072 is greater than zero copy threshold (65536). 01:07:10.181 Zero copy mechanism will not be used. 01:07:10.181 [2024-07-22 11:02:18.027354] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:07:10.181 [2024-07-22 11:02:18.027438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112868 ] 01:07:10.438 [2024-07-22 11:02:18.145838] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
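The preceding lines show both halves of the harness pattern used for every case in this digest-error test: after each 2-second run the transient transport error count is pulled out of bdev_get_iostat with jq and asserted non-zero, and a fresh bdevperf is then launched on its own RPC socket with -z so it idles until driven over RPC. The snippet below is a minimal standalone sketch of that pattern, assuming a built SPDK tree under $SPDK_DIR and an already-attached nvme0n1 bdev; the harness helpers (bperf_rpc, waitforlisten, get_transient_errcount) are replaced here with plain rpc.py calls and a simple polling loop.

#!/usr/bin/env bash
# Sketch only: paths, bdev name and bdevperf arguments mirror the trace above.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout location
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf on a private RPC socket: core mask 0x2, 128 KiB random writes,
# 2 second run, queue depth 16, -z = hold the I/O until perform_tests is called.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Simple stand-in for the harness's waitforlisten(): wait for the RPC socket.
while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done

# After a run, count transient transport errors the same way
# get_transient_errcount() does (requires --nvme-error-stat, set at attach time).
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
(( errcount > 0 ))   # the test asserts at least one digest error was observed

Running bdevperf against its own socket keeps these initiator-side RPCs separate from the ones aimed at the nvmf target, which is why two different RPC endpoints appear in the surrounding trace.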
01:07:10.438 [2024-07-22 11:02:18.159230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:10.438 [2024-07-22 11:02:18.199036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:11.002 11:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:11.002 11:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:07:11.002 11:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:07:11.002 11:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:07:11.261 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:07:11.261 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:11.261 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:07:11.261 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:11.261 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:07:11.261 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:07:11.519 nvme0n1 01:07:11.519 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:07:11.519 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:11.519 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:07:11.519 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:11.519 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:07:11.519 11:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:07:11.519 I/O size of 131072 is greater than zero copy threshold (65536). 01:07:11.519 Zero copy mechanism will not be used. 01:07:11.519 Running I/O for 2 seconds... 
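With bdevperf listening, the xtrace above wires up the digest-error scenario over two RPC sockets: bperf_rpc (rpc.py -s /var/tmp/bperf.sock) enables per-status-code NVMe error counters and unlimited bdev retries, then attaches the controller with the TCP data digest enabled (--ddgst), while rpc_cmd (which talks to the nvmf target rather than to bperf) toggles crc32c error injection in the accel layer - cleared before the attach, then set to corrupt results so that subsequent WRITEs fail their digest check. A condensed sketch of that sequence; the commands and arguments are copied from the trace, and the target socket path is an assumption, since the trace only shows the bperf socket explicitly:

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
TARGET_SOCK=/var/tmp/spdk.sock    # assumed default RPC socket of the nvmf target

# initiator side (bdevperf): count NVMe errors per status code, retry I/O indefinitely
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# target side: make sure no stale crc32c injection is active before connecting
"$SPDK/scripts/rpc.py" -s "$TARGET_SOCK" accel_error_inject_error -o crc32c -t disable

# attach the remote namespace with the data digest enabled
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# now inject crc32c corruption (same flags as the trace) and kick off the workload
"$SPDK/scripts/rpc.py" -s "$TARGET_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests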
01:07:11.519 [2024-07-22 11:02:19.446945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.519 [2024-07-22 11:02:19.447361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.519 [2024-07-22 11:02:19.447388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.519 [2024-07-22 11:02:19.451086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.519 [2024-07-22 11:02:19.451475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.519 [2024-07-22 11:02:19.451505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.454985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.455376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.455402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.458865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.459262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.459298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.462783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.463143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.463176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.466721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.467098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.467129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.470633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.470996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.471028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.474556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.474935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.474967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.478494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.478861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.478892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.482370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.482736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.482757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.486220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.486588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.486621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.490093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.490464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.490491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.494038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.494426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.494458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.497955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.498338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.498364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.501830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.502196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.502228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.505757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.506120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.506146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.509632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.509995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.510022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.513504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.513867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.513900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.517354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.517714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.517751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.521210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.521579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.521605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.525041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.525440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.525467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.529003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.529378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.529406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.532946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.533307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.533335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.536855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.537226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.537257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.540789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.541156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.541186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.544679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.545050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.545081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.548571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.548948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.548976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.552446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.552820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 
[2024-07-22 11:02:19.552852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.556360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.556730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.814 [2024-07-22 11:02:19.556762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.814 [2024-07-22 11:02:19.560227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.814 [2024-07-22 11:02:19.560620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.560650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.564071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.564454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.564480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.567887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.568297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.568326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.571790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.572179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.572207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.575714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.576080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.576119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.579550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.579926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.579957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.583473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.583825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.583858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.587368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.587734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.587761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.591210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.591576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.591604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.595090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.595456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.595482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.598972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.599356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.599380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.602843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.603219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.603246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.606730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.607090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.607123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.610559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.610926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.610957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.614438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.614783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.614814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.618289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.618651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.618683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.622125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.622509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.622541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.626049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.626429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.626455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.629914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.630288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.630320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.633804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.634154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.634181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.637683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.638060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.638086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.641539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.641905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.641937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.645387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.645748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.645773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.649223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.649606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.649644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.653114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.653493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.653524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.656974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.657328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.657361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.660856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 
[2024-07-22 11:02:19.661206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.661233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.664719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.665086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.665117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.668614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.668979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.669011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.672433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.672820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.672848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.676381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.676759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.676791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.680191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.680586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.680614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.684023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.684396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.684423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.687934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.688303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.688330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.691746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.692126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.692154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.695669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.696059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.696087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.699600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.699973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.700003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.703510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.703898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.703925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.815 [2024-07-22 11:02:19.707402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.815 [2024-07-22 11:02:19.707766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.815 [2024-07-22 11:02:19.707798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.816 [2024-07-22 11:02:19.711279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.816 [2024-07-22 11:02:19.711655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.816 [2024-07-22 11:02:19.711695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:11.816 [2024-07-22 11:02:19.715175] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.816 [2024-07-22 11:02:19.715572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.816 [2024-07-22 11:02:19.715600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:11.816 [2024-07-22 11:02:19.719064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.816 [2024-07-22 11:02:19.719448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.816 [2024-07-22 11:02:19.719475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:11.816 [2024-07-22 11:02:19.722937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.816 [2024-07-22 11:02:19.723315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.816 [2024-07-22 11:02:19.723341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:11.816 [2024-07-22 11:02:19.726823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:11.816 [2024-07-22 11:02:19.727176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:11.816 [2024-07-22 11:02:19.727203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.730638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.730990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.731017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.734514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.734889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.734923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.738458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.738830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.738859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
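Each repeating triplet in this stretch of the log is one injected failure: the TCP transport reports a data digest mismatch on the corrupted crc32c result, and the initiator prints the affected WRITE together with its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; with --bdev-retry-count -1 the I/O is then retried rather than failed up the stack. Because bdev_nvme was started with --nvme-error-stat, every such completion is also tallied per status code, and the pass/fail decision is the same bdev_get_iostat + jq check that produced the "(( 224 > 0 ))" comparison for the previous case earlier in the log. A minimal sketch of that check, using the filter shown in the trace:

# read the per-status-code error counter that --nvme-error-stat accumulates
count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# the digest-error case passes only if at least one transient transport error was counted
(( count > 0 ))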
01:07:12.094 [2024-07-22 11:02:19.742378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.742746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.742777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.746249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.746636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.746668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.750170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.750554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.750583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.754058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.754436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.754471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.757923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.758278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.758303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.761807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.762184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.762216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.765701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.766091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.766118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.769563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.769938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.769972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.773427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.773802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.773828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.777308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.777670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.777702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.781176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.781544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.781581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.784987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.785361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.785387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.788891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.789250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.789286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.792735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.793091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.793114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.796599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.796974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.797005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.800504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.800872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.800903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.804352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.804723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.804752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.808234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.808611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.808638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.812104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.812471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.812497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.815953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.816332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.094 [2024-07-22 11:02:19.816358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.094 [2024-07-22 11:02:19.819800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.094 [2024-07-22 11:02:19.820163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.820190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.823690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.824046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.824072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.827528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.827876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.827910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.831424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.831798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.831830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.835326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.835675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.835706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.839212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.839616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.839646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.843133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.843501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.843527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.846998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.847396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 
[2024-07-22 11:02:19.847423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.850888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.851288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.851335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.854806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.855180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.855212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.858719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.859077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.859109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.862605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.862976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.863008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.866506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.866878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.866911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.870423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.870793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.870824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.874261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.874644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.874673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.878133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.878519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.878551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.881951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.882327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.882353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.885823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.886189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.886220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.889673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.890037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.890064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.893533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.893908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.893952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.897424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.897795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.897821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.901287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.901653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.901687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.905166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.905533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.905571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.909048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.909405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.909432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.912890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.913244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.913278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.916762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.917137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.917166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.920620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.921003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.921031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.924504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.924871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.924903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.928348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.928710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.095 [2024-07-22 11:02:19.928739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.095 [2024-07-22 11:02:19.932185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.095 [2024-07-22 11:02:19.932549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.932575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.936029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.936398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.936421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.939895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.940278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.940305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.943802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.944171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.944204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.947635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.948021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.948049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.951560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.951917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.951944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.955498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 
[2024-07-22 11:02:19.955880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.955911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.959410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.959801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.959833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.963334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.963698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.963730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.967229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.967614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.967650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.971104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.971472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.971499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.974973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.975352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.975378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.978869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.979236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.979275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.982710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.983074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.983106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.986586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.986936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.986963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.990443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.990814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.990845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.994329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.994709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.994740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:19.998225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:19.998612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:19.998639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:20.002083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:20.002459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:20.002504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:20.006011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:20.006434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:20.006462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:20.010158] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:20.010532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:20.010565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:20.014011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:20.014372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:20.014403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:20.017884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:20.018255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:20.018298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.096 [2024-07-22 11:02:20.021685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.096 [2024-07-22 11:02:20.022044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.096 [2024-07-22 11:02:20.022070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.025531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.025907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.025941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.029445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.029821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.029847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.033516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.033879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.033911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
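The data_crc32_calc_done errors above and below are NVMe/TCP data digest (DDGST) failures: the receiver computes a CRC32C over each PDU's data and compares it with the digest carried in the PDU, and every mismatch is completed back to the host as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status printed alongside it. As a rough illustration only (this is not SPDK's implementation, and pdu_payload/received_ddgst are made-up names for the example), a minimal CRC32C check could be sketched in Python like this:

# Minimal sketch of an NVMe/TCP-style data digest (DDGST) check.
# NVMe/TCP's data digest is a CRC32C (Castagnoli polynomial); this is a
# plain bitwise implementation, chosen for portability rather than speed.

def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78   # reflected CRC-32C polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

def data_digest_ok(pdu_payload: bytes, received_ddgst: int) -> bool:
    # A mismatch here is what the log reports as "Data digest error"; the
    # affected command is then completed with a transport-level error status.
    return crc32c(pdu_payload) == received_ddgst

if __name__ == "__main__":
    payload = b"123456789"
    print(hex(crc32c(payload)))                 # 0xe3069283, the standard CRC-32C check value
    print(data_digest_ok(payload, 0xE3069283))  # True
    print(data_digest_ok(payload, 0xDEADBEEF))  # False -> would be logged as a digest error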
01:07:12.356 [2024-07-22 11:02:20.037364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.037736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.037784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.041235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.041602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.041630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.045041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.045418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.045450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.048975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.049355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.049382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.052873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.053222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.053250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.056738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.057095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.057127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.060608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.060976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.061009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.064513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.064884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.064915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.068419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.068776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.068807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.072281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.072639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.356 [2024-07-22 11:02:20.072666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.356 [2024-07-22 11:02:20.076225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.356 [2024-07-22 11:02:20.076600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.076626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.080114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.080491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.080521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.084024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.084395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.084424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.087860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.088214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.088244] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.091739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.092109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.092139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.095682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.096056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.096086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.099616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.099978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.100009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.103445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.103809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.103839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.107312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.107667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.107697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.111158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.111517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.111549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.115038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.115419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.115443] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.118912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.119286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.119312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.122834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.123201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.123231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.126728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.127085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.127115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.130591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.130948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.130979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.134479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.134836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.134861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.138400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.138775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.138805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.142340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.142717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:07:12.357 [2024-07-22 11:02:20.142746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.146246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.146625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.146655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.150125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.150496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.150523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.154065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.154426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.154453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.157935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.158322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.158351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.161853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.162235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.162274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.165767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.166141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.166169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.169698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.170080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.170109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.173588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.173943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.173972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.177423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.177777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.177880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.181343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.181710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.181739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.185230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.185604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.357 [2024-07-22 11:02:20.185630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.357 [2024-07-22 11:02:20.189068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.357 [2024-07-22 11:02:20.189424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.189462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.192897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.193281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.193311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.196758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.197126] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.197149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.200686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.201053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.201083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.204571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.204926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.204956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.208466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.208828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.208860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.212388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.212753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.212785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.216287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.216654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.216683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.220149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.220514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.220540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.223977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.224364] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.224390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.227903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.228293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.228313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.231768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.232142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.232170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.235708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.236074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.236113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.239655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.240033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.240060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.243564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.243930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.243963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.247455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.247815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.247842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.251352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 
01:07:12.358 [2024-07-22 11:02:20.251705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.251737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.255205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.255601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.255631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.259122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.259512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.259538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.262978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.263367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.263394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.266876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.267253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.267291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.270765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.271113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.271146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.274600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.274975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.275008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.278514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.278883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.278915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.282444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.282814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.282846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.358 [2024-07-22 11:02:20.286344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.358 [2024-07-22 11:02:20.286710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.358 [2024-07-22 11:02:20.286742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.290189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.290552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.290578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.294126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.294515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.294546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.298052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.298440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.298466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.301884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.302258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.302298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.305852] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.306228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.306259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.309727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.310091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.310118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.313587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.313954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.313987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.317509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.317890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.317916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.321373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.321751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.321782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.325252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.325631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.325658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.329145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.329539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.329566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
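Each completion above prints its status as a status-code-type/status-code pair: "(00/22)" is status code type 0 (generic command status) with status code 0x22, which spdk_nvme_print_completion labels COMMAND TRANSIENT TRANSPORT ERROR, and dnr:0 means the Do Not Retry bit is clear, so the failed WRITEs remain eligible for retry. A small, hypothetical helper (not part of SPDK; only the pair seen in this log is mapped) to decode that pair might look like:

# Hypothetical decoder for the "(SCT/SC) ... dnr:" fields printed above.
# Only the status pair that appears in this log is mapped.

STATUS_NAMES = {
    (0x0, 0x22): "COMMAND TRANSIENT TRANSPORT ERROR",  # as printed by spdk_nvme_print_completion
}

def decode_status(sct: int, sc: int, dnr: int) -> str:
    name = STATUS_NAMES.get((sct, sc), f"unknown status (sct={sct:#04x}, sc={sc:#04x})")
    retry = "may be retried" if dnr == 0 else "do not retry"
    return f"{name} ({retry})"

if __name__ == "__main__":
    print(decode_status(0x00, 0x22, dnr=0))  # COMMAND TRANSIENT TRANSPORT ERROR (may be retried)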
01:07:12.620 [2024-07-22 11:02:20.333032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.333420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.333448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.336939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.337311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.337337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.340793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.341163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.341189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.344653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.345011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.345038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.348512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.348893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.620 [2024-07-22 11:02:20.348924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.620 [2024-07-22 11:02:20.352425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.620 [2024-07-22 11:02:20.352804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.352831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.356400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.356771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.356802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.360240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.360624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.360659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.364133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.364515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.364541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.367997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.368382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.368414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.371863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.372226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.372254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.375753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.376088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.376115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.379485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.379819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.379850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.383187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.383538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.383565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.386859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.387214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.387237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.390632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.390962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.390990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.394307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.394626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.394659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.397974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.398316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.398345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.401683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.402038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.402065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.405385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.405708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.405740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.409095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.409445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.409478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.412809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.413139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.413167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.416483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.416810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.416839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.420115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.420454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.420474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.423792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.424131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.424155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.427434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.427765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.427792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.431168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.431511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.431537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.434845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.435185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 
[2024-07-22 11:02:20.435214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.438507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.438844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.438882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.442567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.442906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.442938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.446205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.446560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.446602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.449899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.450229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.450249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.453551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.453883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.453910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.457199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.621 [2024-07-22 11:02:20.457528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.621 [2024-07-22 11:02:20.457554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.621 [2024-07-22 11:02:20.460872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.461232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.461277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.464617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.464953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.464981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.468386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.468729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.468760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.472041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.472385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.472420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.475719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.476048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.476076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.479426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.479748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.479784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.483125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.483464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.483499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.486891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.487225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.487252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.490651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.490981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.491009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.494342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.494676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.494708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.498002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.498329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.498352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.501590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.501940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.502010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.505309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.505633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.505662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.509008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.509340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.509363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.512600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.512913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.512951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.516256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.516604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.516630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.519928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.520259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.520289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.523613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.523942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.523969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.527323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.527643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.527673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.531008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.531368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.531398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.534722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.535063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.535094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.538418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 
[2024-07-22 11:02:20.538746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.538777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.542039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.542374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.542400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.545681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.546040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.546065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.622 [2024-07-22 11:02:20.549354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.622 [2024-07-22 11:02:20.549678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.622 [2024-07-22 11:02:20.549706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.882 [2024-07-22 11:02:20.552983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.882 [2024-07-22 11:02:20.553329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.882 [2024-07-22 11:02:20.553352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.882 [2024-07-22 11:02:20.556708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.882 [2024-07-22 11:02:20.557034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.882 [2024-07-22 11:02:20.557053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.882 [2024-07-22 11:02:20.560251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.882 [2024-07-22 11:02:20.560595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.882 [2024-07-22 11:02:20.560621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.882 [2024-07-22 11:02:20.563977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.882 [2024-07-22 11:02:20.564317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.882 [2024-07-22 11:02:20.564337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.882 [2024-07-22 11:02:20.567679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.882 [2024-07-22 11:02:20.567995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.882 [2024-07-22 11:02:20.568031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.882 [2024-07-22 11:02:20.571400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.882 [2024-07-22 11:02:20.571736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.571768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.575119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.575488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.575514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.578830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.579193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.579221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.582525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.582856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.582884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.586136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.586479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.586509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.589800] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.590117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.590145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.593481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.593818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.593847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.597174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.597526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.597553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.600882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.601210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.601238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.604585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.604916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.604943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.608263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.608609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.608634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.611959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.612309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.612331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
01:07:12.883 [2024-07-22 11:02:20.615674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.616001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.616029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.619368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.619692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.619724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.623053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.623398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.623425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.626774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.627112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.627142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.630483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.630822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.630863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.634195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.634540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.634560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.637927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.638253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.638287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.641596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.641936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.641955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.645346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.645670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.645699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.649041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.649379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.649399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.652663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.652983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.653020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.656390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.656734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.656766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.660064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.660415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.660447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.663795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.664129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.664157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.667434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.667755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.667785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.671106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.671447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.671484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.674819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.675139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.675168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.678448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.678766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.678792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.682114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.682460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.682486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.685821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.686151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.686172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.689480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.689813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.689834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.693152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.693492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.693512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.696787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.697105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.697129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.883 [2024-07-22 11:02:20.700477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.883 [2024-07-22 11:02:20.700809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.883 [2024-07-22 11:02:20.700836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.704148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.704494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.704520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.707796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.708125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.708143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.711451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.711769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.711805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.715150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.715490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 
[2024-07-22 11:02:20.715516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.718878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.719214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.719241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.722611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.722941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.722969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.726259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.726598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.726624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.729888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.730198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.730229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.733521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.733862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.733884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.737164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.737508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.737535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.740879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.741207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.741234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.744479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.744811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.744834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.748194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.748531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.748552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.751919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.752242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.752281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.755640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.755971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.755998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.759331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.759669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.759696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.763033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.763372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.763393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.766750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.767085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.767113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.770428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.770752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.770779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.774096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.774439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.774476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.777836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.778161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.778181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.781488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.781812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.781832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.785140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.785487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.785513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.788825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.789154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.789181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.792500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.792829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.792855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.796225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.796569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.796596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.799933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.800258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.800297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.803594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.803932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.803959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.807261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.807603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.807630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:12.884 [2024-07-22 11:02:20.810957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:12.884 [2024-07-22 11:02:20.811291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:12.884 [2024-07-22 11:02:20.811312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.143 [2024-07-22 11:02:20.814669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.143 [2024-07-22 11:02:20.814986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.143 [2024-07-22 11:02:20.815024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.143 [2024-07-22 11:02:20.818370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.143 
[2024-07-22 11:02:20.818706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.143 [2024-07-22 11:02:20.818747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.143 [2024-07-22 11:02:20.822054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.143 [2024-07-22 11:02:20.822389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.143 [2024-07-22 11:02:20.822408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.143 [2024-07-22 11:02:20.825787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.143 [2024-07-22 11:02:20.826102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.826129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.829469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.829809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.829833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.833075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.833410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.833430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.836797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.837119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.837149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.840463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.840776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.840800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.844076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.844419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.844440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.847775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.848114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.848142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.851409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.851737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.851765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.855074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.855409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.855432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.858753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.859075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.859104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.862433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.862769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.862798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.866137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.866479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.866509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.869894] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.870227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.870253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.873573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.873907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.873926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.877284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.877593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.877613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.880926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.881254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.881293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.884603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.884933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.884952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.888276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.888599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.888632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.891940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.892286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.892311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
01:07:13.144 [2024-07-22 11:02:20.895665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.896006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.896026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.899386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.899713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.899741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.903139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.903465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.903484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.906879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.907210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.907237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.910529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.910860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.910887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.914186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.914518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.914544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.917861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.918181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.918211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.921526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.921863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.921882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.925148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.925484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.925510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.928781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.929110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.929137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.932465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.932783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.932820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.936141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.936481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.936508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.939854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.940182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.940210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.943515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.943843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.943870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.947197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.947528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.947554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.950832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.951163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.951197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.954512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.954842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.954863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.958180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.958513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.958532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.961889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.962207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.962235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.965539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.965872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.965895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.969236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.969572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.969599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.972899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.973215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.973254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.976597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.976930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.976956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.980292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.980625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.144 [2024-07-22 11:02:20.980644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.144 [2024-07-22 11:02:20.983971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.144 [2024-07-22 11:02:20.984307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:20.984328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:20.987665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:20.987983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:20.988007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:20.991234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:20.991577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:20.991603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:20.994954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:20.995299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 
[2024-07-22 11:02:20.995337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:20.998676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:20.999010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:20.999037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.002388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.002722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.002745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.006082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.006426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.006445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.009861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.010184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.010211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.013573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.013897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.013926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.017229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.017576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.017602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.020886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.021215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.021239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.024555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.024885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.024905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.028261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.028607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.028627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.031937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.032288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.032307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.035645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.035971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.035995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.039298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.039640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.039667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.042962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.043305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.043328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.046669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.046998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.047029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.050404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.050722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.050762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.054096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.054444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.054480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.057833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.058162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.058182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.061533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.061861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.061884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.065152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.065487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.065516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.068822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.069134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.069153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.145 [2024-07-22 11:02:21.072408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.145 [2024-07-22 11:02:21.072745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.145 [2024-07-22 11:02:21.072778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.076096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.076442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.076461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.079751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.080081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.080100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.083424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.083756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.083790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.087097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.087428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.087461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.090766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.091100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.091138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.094468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.094799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.094826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.098165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 
[2024-07-22 11:02:21.098507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.098535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.101785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.102094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.102113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.105394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.105719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.105760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.109016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.109355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.109373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.112695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.113015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.113035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.116349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.116693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.116720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.120047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.120391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.120411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.123696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.124010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.124036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.127448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.127763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.127787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.131176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.131505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.131526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.134796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.135121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.135160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.138526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.138842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.138870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.142212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.142532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.142562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.145814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.405 [2024-07-22 11:02:21.146149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.405 [2024-07-22 11:02:21.146178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:07:13.405 [2024-07-22 11:02:21.149555] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90
01:07:13.405 [2024-07-22 11:02:21.149893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:07:13.405 [2024-07-22 11:02:21.149917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern (a data_crc32_calc_done digest error on tqpair 0x24bb2c0, the offending WRITE with sqid:1 cid:15 nsid:1 len:32, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens of further WRITEs between 11:02:21.153249 and 11:02:21.426696, differing only in lba, sqhd, and timestamps ...]
01:07:13.664 [2024-07-22 11:02:21.430058]
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.664 [2024-07-22 11:02:21.430399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.664 [2024-07-22 11:02:21.430420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:07:13.664 [2024-07-22 11:02:21.433713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24bb2c0) with pdu=0x2000190fef90 01:07:13.664 [2024-07-22 11:02:21.433929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:07:13.664 [2024-07-22 11:02:21.433947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:07:13.664 01:07:13.664 Latency(us) 01:07:13.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:13.664 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:07:13.664 nvme0n1 : 2.00 8166.25 1020.78 0.00 0.00 1955.75 1585.76 9053.97 01:07:13.664 =================================================================================================================== 01:07:13.664 Total : 8166.25 1020.78 0.00 0.00 1955.75 1585.76 9053.97 01:07:13.664 0 01:07:13.664 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:07:13.664 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:07:13.665 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:07:13.665 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:07:13.665 | .driver_specific 01:07:13.665 | .nvme_error 01:07:13.665 | .status_code 01:07:13.665 | .command_transient_transport_error' 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 527 > 0 )) 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112868 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112868 ']' 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112868 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112868 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:07:13.921 killing process with pid 112868 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112868' 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112868 01:07:13.921 Received shutdown signal, test time was about 
2.000000 seconds 01:07:13.921 01:07:13.921 Latency(us) 01:07:13.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:13.921 =================================================================================================================== 01:07:13.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:07:13.921 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112868 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 112570 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112570 ']' 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112570 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112570 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:07:14.190 killing process with pid 112570 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112570' 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112570 01:07:14.190 11:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112570 01:07:14.190 01:07:14.190 real 0m17.047s 01:07:14.190 user 0m31.248s 01:07:14.190 sys 0m4.847s 01:07:14.190 11:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 01:07:14.190 11:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:07:14.190 ************************************ 01:07:14.190 END TEST nvmf_digest_error 01:07:14.190 ************************************ 01:07:14.190 11:02:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 01:07:14.190 11:02:22 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 01:07:14.190 11:02:22 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 01:07:14.191 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 01:07:14.191 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 01:07:14.448 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:07:14.448 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 01:07:14.448 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:07:14.449 rmmod nvme_tcp 01:07:14.449 rmmod nvme_fabrics 01:07:14.449 rmmod nvme_keyring 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 112570 ']' 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 112570 01:07:14.449 
11:02:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 112570 ']' 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 112570 01:07:14.449 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (112570) - No such process 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 112570 is not found' 01:07:14.449 Process with pid 112570 is not found 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:07:14.449 01:07:14.449 real 0m35.071s 01:07:14.449 user 1m2.648s 01:07:14.449 sys 0m10.262s 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 01:07:14.449 ************************************ 01:07:14.449 END TEST nvmf_digest 01:07:14.449 11:02:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:07:14.449 ************************************ 01:07:14.449 11:02:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:07:14.449 11:02:22 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 01:07:14.449 11:02:22 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 01:07:14.449 11:02:22 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 01:07:14.449 11:02:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:07:14.449 11:02:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:07:14.449 11:02:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:07:14.449 ************************************ 01:07:14.449 START TEST nvmf_mdns_discovery 01:07:14.449 ************************************ 01:07:14.449 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 01:07:14.707 * Looking for test storage... 
01:07:14.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:07:14.707 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 01:07:14.708 
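The trace above has just loaded the shared test constants: NVMF_PORT=4420, DISCOVERY_PORT=8009, a freshly generated NVME_HOSTNQN/NVME_HOSTID pair, the NVME_HOST argument array, and the mdns_discovery.sh NQNs with /tmp/host.sock as the host-side control socket. As a rough, non-authoritative sketch of how such variables are typically combined on the initiator side of these tests (the cnode1 subsystem NQN is a placeholder, and deriving NVME_HOSTID from the NQN suffix is an assumption, not something shown in this log):

    # hedged sketch only; target address/port taken from the nvmf_veth_init trace further below
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}             # assumption: host ID reuses the UUID suffix
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"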
11:02:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:07:14.708 Cannot find device "nvmf_tgt_br" 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:07:14.708 Cannot find device "nvmf_tgt_br2" 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:07:14.708 Cannot find device "nvmf_tgt_br" 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:07:14.708 Cannot find device "nvmf_tgt_br2" 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:07:14.708 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:07:14.966 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:14.966 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:07:14.966 11:02:22 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:07:14.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:07:14.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 01:07:14.966 01:07:14.966 --- 10.0.0.2 ping statistics --- 01:07:14.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:14.966 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:07:14.966 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:07:14.966 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 01:07:14.966 01:07:14.966 --- 10.0.0.3 ping statistics --- 01:07:14.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:14.966 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:07:14.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:07:14.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 01:07:14.966 01:07:14.966 --- 10.0.0.1 ping statistics --- 01:07:14.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:14.966 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 01:07:14.966 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:15.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
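The nvmf_veth_init sequence above builds the test network: an initiator veth pair in the root namespace (10.0.0.1), target veth pairs moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), all of it bridged through nvmf_br, an iptables rule admitting TCP/4420, and ping checks in both directions. Condensed from the commands in the trace into a standalone sketch (the second target interface is omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns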
01:07:15.232 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=113162 01:07:15.232 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 113162 01:07:15.232 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 113162 ']' 01:07:15.232 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:15.232 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:15.232 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:15.232 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:15.232 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:15.232 11:02:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 01:07:15.232 [2024-07-22 11:02:22.940108] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:07:15.232 [2024-07-22 11:02:22.940179] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:15.232 [2024-07-22 11:02:23.058614] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:07:15.232 [2024-07-22 11:02:23.083506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:15.232 [2024-07-22 11:02:23.127012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:15.232 [2024-07-22 11:02:23.127061] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:15.232 [2024-07-22 11:02:23.127072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:15.232 [2024-07-22 11:02:23.127080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:15.232 [2024-07-22 11:02:23.127086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
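The target application itself is started inside the namespace (`ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc`, visible in the trace above), and the harness then blocks until the app listens on /var/tmp/spdk.sock. A rough sketch of that launch-and-wait step; the polling loop is only an illustrative stand-in for the waitforlisten helper, not its actual implementation, and the relative binary path is an assumption:

# Sketch: start nvmf_tgt in the namespace and wait for its RPC socket.
NS=nvmf_tgt_ns_spdk
SOCK=/var/tmp/spdk.sock            # default SPDK RPC socket, as printed in the log

ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!

# crude stand-in for waitforlisten: poll until the UNIX socket appears or the app dies
for _ in $(seq 1 100); do
    [ -S "$SOCK" ] && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done

--wait-for-rpc keeps the subsystem framework parked until framework_start_init is issued over RPC, which is why that call shows up later in the trace before any transport is created.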
01:07:15.232 [2024-07-22 11:02:23.127110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.164 [2024-07-22 11:02:23.959354] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.164 [2024-07-22 11:02:23.971443] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.164 null0 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
01:07:16.164 null1 01:07:16.164 11:02:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.164 null2 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.164 null3 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=113212 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 113212 /tmp/host.sock 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 113212 ']' 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:16.164 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:07:16.164 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:07:16.165 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:16.165 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.165 [2024-07-22 11:02:24.082910] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:07:16.165 [2024-07-22 11:02:24.083147] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113212 ] 01:07:16.422 [2024-07-22 11:02:24.201561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
01:07:16.422 [2024-07-22 11:02:24.227171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:16.422 [2024-07-22 11:02:24.270496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:07:17.357 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:17.357 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 01:07:17.357 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 01:07:17.357 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 01:07:17.357 11:02:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 01:07:17.357 11:02:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=113240 01:07:17.357 11:02:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 01:07:17.357 11:02:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 01:07:17.357 11:02:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 01:07:17.357 Process 979 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 01:07:17.357 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 01:07:17.357 Successfully dropped root privileges. 01:07:17.357 avahi-daemon 0.8 starting up. 01:07:17.357 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 01:07:18.322 Successfully called chroot(). 01:07:18.322 Successfully dropped remaining capabilities. 01:07:18.322 No service file found in /etc/avahi/services. 01:07:18.322 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 01:07:18.322 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 01:07:18.322 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 01:07:18.322 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 01:07:18.322 Network interface enumeration completed. 01:07:18.322 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 01:07:18.322 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 01:07:18.322 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 01:07:18.322 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 01:07:18.322 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 3481165556. 
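The mDNS responder that answers the discovery queries is avahi-daemon, started inside the target namespace after killing any stale instance, with an inline configuration restricted to the two target interfaces and IPv4 only (the `echo -e '[server]...' | avahi-daemon -f /dev/fd/63` pair above). Roughly equivalent, written with an explicit process substitution for readability; the configuration keys are exactly the ones passed in the log:

# Sketch: run avahi-daemon in the target namespace with an inline, IPv4-only config
avahi-daemon --kill 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
    '[server]' \
    'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
    'use-ipv4=yes' \
    'use-ipv6=no') &
avahipid=$!
sleep 1   # give it time to join the mDNS multicast groups on both interfaces

Limiting allow-interfaces to nvmf_tgt_if/nvmf_tgt_if2 keeps the responder from advertising on the build host's real NICs, which is why the startup messages above only show the two 10.0.0.x addresses being registered.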
01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.322 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:18.323 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 01:07:18.582 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.583 [2024-07-22 11:02:26.366620] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.583 [2024-07-22 11:02:26.381235] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.583 [2024-07-22 11:02:26.425106] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
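Stripped of the xtrace noise, the RPC traffic up to this point is two parallel setups: the target instance (default socket /var/tmp/spdk.sock) gets its discovery filter, TCP transport, null bdevs, two subsystems and listeners, while the host-side nvmf_tgt (started with -m 0x1 -r /tmp/host.sock) enables bdev_nvme logging and starts the mDNS-driven discovery service. A condensed sketch as plain rpc.py calls, assuming rpc_cmd is a thin wrapper around scripts/rpc.py; the method names and arguments are taken verbatim from the log:

rpc="scripts/rpc.py"

# Target side (default RPC socket)
$rpc nvmf_set_config --discovery-filter=address
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$rpc bdev_null_create null0 1000 512
$rpc bdev_null_create null1 1000 512
$rpc bdev_null_create null2 1000 512
$rpc bdev_null_create null3 1000 512
$rpc bdev_wait_for_examine
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009

# Host side (the second nvmf_tgt on /tmp/host.sock acts as the mDNS discovery client)
$rpc -s /tmp/host.sock log_set_flag bdev_nvme
$rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

The remaining data listener on 10.0.0.3:4420 and the nvmf_publish_mdns_prr call that actually registers the discovery service records with avahi follow immediately after this point in the log.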
01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.583 [2024-07-22 11:02:26.437068] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:18.583 11:02:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 01:07:19.520 [2024-07-22 11:02:27.265157] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 01:07:20.088 [2024-07-22 11:02:27.864228] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:07:20.088 [2024-07-22 11:02:27.864278] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 01:07:20.088 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:20.088 cookie is 0 01:07:20.088 is_local: 1 01:07:20.088 our_own: 0 01:07:20.088 wide_area: 0 01:07:20.089 multicast: 1 01:07:20.089 cached: 1 01:07:20.089 [2024-07-22 11:02:27.964046] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:07:20.089 [2024-07-22 11:02:27.964083] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 01:07:20.089 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:20.089 cookie is 0 01:07:20.089 is_local: 1 01:07:20.089 our_own: 0 01:07:20.089 wide_area: 0 01:07:20.089 multicast: 1 01:07:20.089 cached: 1 01:07:20.089 [2024-07-22 11:02:27.964094] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 01:07:20.348 [2024-07-22 11:02:28.063875] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:07:20.348 [2024-07-22 11:02:28.063903] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 01:07:20.348 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:20.348 cookie is 0 01:07:20.348 is_local: 1 01:07:20.348 our_own: 0 01:07:20.348 wide_area: 0 01:07:20.348 multicast: 1 01:07:20.348 cached: 1 01:07:20.348 [2024-07-22 11:02:28.163708] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:07:20.348 [2024-07-22 11:02:28.163730] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 01:07:20.348 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:20.348 cookie is 0 01:07:20.348 is_local: 1 01:07:20.348 our_own: 0 01:07:20.348 wide_area: 0 01:07:20.348 multicast: 1 01:07:20.348 cached: 1 01:07:20.348 [2024-07-22 11:02:28.163738] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 01:07:21.286 [2024-07-22 11:02:28.866839] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:07:21.286 [2024-07-22 11:02:28.866875] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:07:21.286 [2024-07-22 11:02:28.866892] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:21.286 [2024-07-22 11:02:28.952781] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 01:07:21.286 [2024-07-22 11:02:29.009466] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 01:07:21.286 [2024-07-22 11:02:29.009498] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 01:07:21.286 [2024-07-22 11:02:29.066211] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:07:21.286 [2024-07-22 11:02:29.066235] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:07:21.286 [2024-07-22 11:02:29.066248] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:07:21.286 [2024-07-22 11:02:29.153149] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 01:07:21.286 [2024-07-22 11:02:29.208823] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 01:07:21.286 [2024-07-22 11:02:29.208855] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:07:23.821 11:02:31 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 01:07:23.821 
11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:23.821 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:24.080 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:24.081 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:24.081 11:02:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 01:07:25.017 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 01:07:25.018 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:07:25.018 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:25.018 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:25.018 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:25.018 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 01:07:25.018 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 01:07:25.018 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 01:07:25.018 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 01:07:25.018 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:25.018 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:25.018 [2024-07-22 11:02:32.947037] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:07:25.018 [2024-07-22 11:02:32.947491] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:07:25.018 [2024-07-22 11:02:32.947523] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:25.018 [2024-07-22 11:02:32.947551] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:07:25.018 [2024-07-22 11:02:32.947560] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:07:25.277 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:25.277 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 01:07:25.277 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:25.277 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:25.277 [2024-07-22 11:02:32.958974] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:07:25.277 [2024-07-22 11:02:32.959483] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:07:25.277 [2024-07-22 11:02:32.959527] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:07:25.277 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:25.277 11:02:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 01:07:25.277 [2024-07-22 11:02:33.089329] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 01:07:25.277 [2024-07-22 11:02:33.090325] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 01:07:25.277 [2024-07-22 11:02:33.154408] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 01:07:25.277 [2024-07-22 11:02:33.154432] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
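Once the 4421 listeners are added, each discovery controller receives an AER, refetches the discovery log page, and attaches the new path, so both mdns0_nvme0 and mdns1_nvme0 should now expose paths on 4420 and 4421. The get_subsystem_paths checks that follow do exactly that by pulling trsvcid out of bdev_nvme_get_controllers; a condensed version of that check, assuming the same /tmp/host.sock RPC socket used throughout this test:

# List the service ports (trsvcid) of every path attached to a discovered controller.
get_subsystem_paths() {
    local name=$1
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

[[ "$(get_subsystem_paths mdns0_nvme0)" == "4420 4421" ]]
[[ "$(get_subsystem_paths mdns1_nvme0)" == "4420 4421" ]]

The same rpc_cmd-plus-jq pattern (piped through sort and xargs to get a stable, single-line value) is what produces the [[ ... == ... ]] comparisons scattered through this trace.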
01:07:25.277 [2024-07-22 11:02:33.154438] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:07:25.277 [2024-07-22 11:02:33.154454] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:07:25.277 [2024-07-22 11:02:33.154485] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 01:07:25.277 [2024-07-22 11:02:33.154492] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 01:07:25.277 [2024-07-22 11:02:33.154498] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 01:07:25.277 [2024-07-22 11:02:33.154508] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:25.277 [2024-07-22 11:02:33.200356] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 01:07:25.277 [2024-07-22 11:02:33.200378] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 01:07:25.277 [2024-07-22 11:02:33.200412] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:07:25.277 [2024-07-22 11:02:33.200418] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:07:26.214 11:02:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 01:07:26.214 11:02:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:07:26.214 11:02:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:26.214 11:02:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:26.214 11:02:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:26.214 11:02:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:26.214 11:02:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:26.214 11:02:34 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:26.214 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:26.474 [2024-07-22 11:02:34.254352] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:07:26.474 [2024-07-22 11:02:34.254382] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:26.474 [2024-07-22 11:02:34.254409] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:07:26.474 [2024-07-22 11:02:34.254419] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:26.474 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:26.474 [2024-07-22 11:02:34.260646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:07:26.474 [2024-07-22 11:02:34.260677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:26.474 [2024-07-22 11:02:34.260688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:07:26.474 [2024-07-22 11:02:34.260697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:26.474 [2024-07-22 11:02:34.260707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:07:26.474 [2024-07-22 11:02:34.260715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:26.474 [2024-07-22 11:02:34.260724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:07:26.474 [2024-07-22 11:02:34.260733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:26.474 [2024-07-22 11:02:34.260742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to 
be set 01:07:26.474 [2024-07-22 11:02:34.266335] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:07:26.474 [2024-07-22 11:02:34.266375] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:07:26.474 [2024-07-22 11:02:34.269137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:07:26.475 [2024-07-22 11:02:34.269164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:26.475 [2024-07-22 11:02:34.269174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:07:26.475 [2024-07-22 11:02:34.269183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:26.475 [2024-07-22 11:02:34.269192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:07:26.475 [2024-07-22 11:02:34.269201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:26.475 [2024-07-22 11:02:34.269210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:07:26.475 [2024-07-22 11:02:34.269218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:26.475 [2024-07-22 11:02:34.269227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.270596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.475 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:26.475 11:02:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 01:07:26.475 [2024-07-22 11:02:34.279098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.280600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.475 [2024-07-22 11:02:34.280692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.475 [2024-07-22 11:02:34.280707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.475 [2024-07-22 11:02:34.280717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.280731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.280743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.475 [2024-07-22 11:02:34.280751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.475 [2024-07-22 11:02:34.280761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
01:07:26.475 [2024-07-22 11:02:34.280773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.475 [2024-07-22 11:02:34.289091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.475 [2024-07-22 11:02:34.289156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.475 [2024-07-22 11:02:34.289170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.475 [2024-07-22 11:02:34.289179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.289191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.289203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.475 [2024-07-22 11:02:34.289211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.475 [2024-07-22 11:02:34.289220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.475 [2024-07-22 11:02:34.289231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.475 [2024-07-22 11:02:34.290627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.475 [2024-07-22 11:02:34.290684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.475 [2024-07-22 11:02:34.290697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.475 [2024-07-22 11:02:34.290706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.290718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.290729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.475 [2024-07-22 11:02:34.290737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.475 [2024-07-22 11:02:34.290746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.475 [2024-07-22 11:02:34.290756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:07:26.475 [2024-07-22 11:02:34.299117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.475 [2024-07-22 11:02:34.299186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.475 [2024-07-22 11:02:34.299200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.475 [2024-07-22 11:02:34.299209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.299221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.299232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.475 [2024-07-22 11:02:34.299240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.475 [2024-07-22 11:02:34.299249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.475 [2024-07-22 11:02:34.299260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.475 [2024-07-22 11:02:34.300650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.475 [2024-07-22 11:02:34.300703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.475 [2024-07-22 11:02:34.300716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.475 [2024-07-22 11:02:34.300725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.300737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.300748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.475 [2024-07-22 11:02:34.300756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.475 [2024-07-22 11:02:34.300764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.475 [2024-07-22 11:02:34.300775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:07:26.475 [2024-07-22 11:02:34.309146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.475 [2024-07-22 11:02:34.309213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.475 [2024-07-22 11:02:34.309227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.475 [2024-07-22 11:02:34.309236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.309248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.309259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.475 [2024-07-22 11:02:34.309276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.475 [2024-07-22 11:02:34.309285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.475 [2024-07-22 11:02:34.309296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.475 [2024-07-22 11:02:34.310671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.475 [2024-07-22 11:02:34.310728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.475 [2024-07-22 11:02:34.310741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.475 [2024-07-22 11:02:34.310750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.310761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.310772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.475 [2024-07-22 11:02:34.310780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.475 [2024-07-22 11:02:34.310789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.475 [2024-07-22 11:02:34.310799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:07:26.475 [2024-07-22 11:02:34.319177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.475 [2024-07-22 11:02:34.319247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.475 [2024-07-22 11:02:34.319261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.475 [2024-07-22 11:02:34.319306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.319320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.319344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.475 [2024-07-22 11:02:34.319353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.475 [2024-07-22 11:02:34.319361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.475 [2024-07-22 11:02:34.319373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.475 [2024-07-22 11:02:34.320694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.475 [2024-07-22 11:02:34.320751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.475 [2024-07-22 11:02:34.320764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.475 [2024-07-22 11:02:34.320773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.320784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.320796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.475 [2024-07-22 11:02:34.320803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.475 [2024-07-22 11:02:34.320812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.475 [2024-07-22 11:02:34.320822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:07:26.475 [2024-07-22 11:02:34.329207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.475 [2024-07-22 11:02:34.329264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.475 [2024-07-22 11:02:34.329283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.475 [2024-07-22 11:02:34.329292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.475 [2024-07-22 11:02:34.329304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.475 [2024-07-22 11:02:34.329316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.475 [2024-07-22 11:02:34.329323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.475 [2024-07-22 11:02:34.329332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.475 [2024-07-22 11:02:34.329354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.476 [2024-07-22 11:02:34.330716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.476 [2024-07-22 11:02:34.330772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.330785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.476 [2024-07-22 11:02:34.330793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.330805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.330816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.330824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.330833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.476 [2024-07-22 11:02:34.330843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:07:26.476 [2024-07-22 11:02:34.339229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.476 [2024-07-22 11:02:34.339292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.339306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.476 [2024-07-22 11:02:34.339314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.339326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.339350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.339358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.339367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.476 [2024-07-22 11:02:34.339378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.476 [2024-07-22 11:02:34.340739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.476 [2024-07-22 11:02:34.340792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.340804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.476 [2024-07-22 11:02:34.340813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.340824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.340836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.340843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.340852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.476 [2024-07-22 11:02:34.340862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:07:26.476 [2024-07-22 11:02:34.349251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.476 [2024-07-22 11:02:34.349316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.349329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.476 [2024-07-22 11:02:34.349338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.349349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.349372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.349381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.349389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.476 [2024-07-22 11:02:34.349400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.476 [2024-07-22 11:02:34.350759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.476 [2024-07-22 11:02:34.350814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.350827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.476 [2024-07-22 11:02:34.350836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.350847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.350858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.350866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.350874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.476 [2024-07-22 11:02:34.350885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:07:26.476 [2024-07-22 11:02:34.359284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.476 [2024-07-22 11:02:34.359354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.359368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.476 [2024-07-22 11:02:34.359377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.359389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.359414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.359422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.359431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.476 [2024-07-22 11:02:34.359442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.476 [2024-07-22 11:02:34.360780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.476 [2024-07-22 11:02:34.360834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.360847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.476 [2024-07-22 11:02:34.360856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.360868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.360879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.360887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.360895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.476 [2024-07-22 11:02:34.360906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:07:26.476 [2024-07-22 11:02:34.369314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.476 [2024-07-22 11:02:34.369372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.369385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.476 [2024-07-22 11:02:34.369394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.369406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.369429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.369437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.369446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.476 [2024-07-22 11:02:34.369456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.476 [2024-07-22 11:02:34.370801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.476 [2024-07-22 11:02:34.370856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.370869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.476 [2024-07-22 11:02:34.370878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.370889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.370901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.370908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.370916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.476 [2024-07-22 11:02:34.370927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:07:26.476 [2024-07-22 11:02:34.379338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.476 [2024-07-22 11:02:34.379399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.379412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.476 [2024-07-22 11:02:34.379421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.379433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.379456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.379465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.379473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.476 [2024-07-22 11:02:34.379484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.476 [2024-07-22 11:02:34.380823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.476 [2024-07-22 11:02:34.380876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.476 [2024-07-22 11:02:34.380888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.476 [2024-07-22 11:02:34.380897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.476 [2024-07-22 11:02:34.380908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.476 [2024-07-22 11:02:34.380919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.476 [2024-07-22 11:02:34.380927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.476 [2024-07-22 11:02:34.380935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.476 [2024-07-22 11:02:34.380946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:07:26.477 [2024-07-22 11:02:34.389364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:07:26.477 [2024-07-22 11:02:34.389421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.477 [2024-07-22 11:02:34.389434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301970 with addr=10.0.0.3, port=4420 01:07:26.477 [2024-07-22 11:02:34.389442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301970 is same with the state(5) to be set 01:07:26.477 [2024-07-22 11:02:34.389454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301970 (9): Bad file descriptor 01:07:26.477 [2024-07-22 11:02:34.389477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:07:26.477 [2024-07-22 11:02:34.389485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:07:26.477 [2024-07-22 11:02:34.389494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:07:26.477 [2024-07-22 11:02:34.389504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:07:26.477 [2024-07-22 11:02:34.390843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:07:26.477 [2024-07-22 11:02:34.390897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:26.477 [2024-07-22 11:02:34.390909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1324c80 with addr=10.0.0.2, port=4420 01:07:26.477 [2024-07-22 11:02:34.390918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c80 is same with the state(5) to be set 01:07:26.477 [2024-07-22 11:02:34.390929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1324c80 (9): Bad file descriptor 01:07:26.477 [2024-07-22 11:02:34.390940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:07:26.477 [2024-07-22 11:02:34.390948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:07:26.477 [2024-07-22 11:02:34.390956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:07:26.477 [2024-07-22 11:02:34.390967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
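The reconnect loop above keeps failing in posix_sock_create with errno = 111 until the reset path gives up; on Linux that errno is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 / 10.0.0.3:4420 any more (the discovery updates just below show the subsystems reappearing only on port 4421). The earlier ASYNC EVENT REQUEST completions printed with status (00/08) are the matching generic "ABORTED - SQ DELETION" status for AERs that were still outstanding when the admin queue went away. A quick way to confirm the errno mapping, in plain Python and not part of the test scripts:

    import errno
    import os

    # errno 111, as printed by "connect() failed, errno = 111" above,
    # is ECONNREFUSED on Linux, the platform this autotest runs on.
    print(errno.errorcode[111])   # -> 'ECONNREFUSED'
    print(os.strerror(111))       # -> 'Connection refused'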
01:07:26.477 [2024-07-22 11:02:34.398466] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 01:07:26.477 [2024-07-22 11:02:34.398490] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 01:07:26.477 [2024-07-22 11:02:34.398506] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:26.477 [2024-07-22 11:02:34.398532] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 01:07:26.477 [2024-07-22 11:02:34.398543] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:07:26.477 [2024-07-22 11:02:34.398553] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:07:26.736 [2024-07-22 11:02:34.484395] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 01:07:26.736 [2024-07-22 11:02:34.484449] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:07:27.670 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
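The get_subsystem_names / get_bdev_list / get_subsystem_paths helpers above all go through the harness's rpc_cmd wrapper, which sends JSON-RPC requests to the target over the /tmp/host.sock Unix socket and post-processes the result with jq/sort/xargs. A rough standalone sketch of the same bdev_nvme_get_controllers query follows; the framing (write one JSON object, read until a complete JSON reply parses) is an assumption about the socket exchange rather than a copy of the project's rpc.py, and it only returns data while the target from this test is still up:

    import json
    import socket

    SOCK_PATH = "/tmp/host.sock"          # same socket the test passes via -s

    request = {"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_get_controllers"}

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())

        buf = b""
        response = None
        while response is None:
            chunk = sock.recv(4096)
            if not chunk:
                break                      # target closed the socket early
            buf += chunk
            try:
                response = json.loads(buf)  # keep reading until the JSON is complete
            except json.JSONDecodeError:
                continue

    if response is not None:
        # Same data the test pipes through jq -r '.[].name' | sort | xargs,
        # e.g. ['mdns0_nvme0', 'mdns1_nvme0'] at this point in the run.
        print(sorted(c["name"] for c in response["result"]))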
01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:27.671 11:02:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 01:07:27.929 [2024-07-22 11:02:35.651612] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@65 -- # sort 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:28.864 [2024-07-22 11:02:36.780833] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 01:07:28.864 2024/07/22 11:02:36 error on 
JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 01:07:28.864 request: 01:07:28.864 { 01:07:28.864 "method": "bdev_nvme_start_mdns_discovery", 01:07:28.864 "params": { 01:07:28.864 "name": "mdns", 01:07:28.864 "svcname": "_nvme-disc._http", 01:07:28.864 "hostnqn": "nqn.2021-12.io.spdk:test" 01:07:28.864 } 01:07:28.864 } 01:07:28.864 Got JSON-RPC error response 01:07:28.864 GoRPCClient: error on JSON-RPC call 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:07:28.864 11:02:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 01:07:29.484 [2024-07-22 11:02:37.364593] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 01:07:29.743 [2024-07-22 11:02:37.464435] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 01:07:29.743 [2024-07-22 11:02:37.564282] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:07:29.743 [2024-07-22 11:02:37.564405] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 01:07:29.743 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:29.743 cookie is 0 01:07:29.743 is_local: 1 01:07:29.743 our_own: 0 01:07:29.743 wide_area: 0 01:07:29.743 multicast: 1 01:07:29.743 cached: 1 01:07:29.743 [2024-07-22 11:02:37.664112] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:07:29.743 [2024-07-22 11:02:37.664258] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 01:07:29.743 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:29.743 cookie is 0 01:07:29.743 is_local: 1 01:07:29.743 our_own: 0 01:07:29.743 wide_area: 0 01:07:29.743 multicast: 1 01:07:29.743 cached: 1 01:07:29.743 [2024-07-22 11:02:37.664371] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 01:07:30.003 [2024-07-22 11:02:37.763950] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:07:30.003 [2024-07-22 11:02:37.764069] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 01:07:30.003 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:30.003 cookie is 0 01:07:30.003 is_local: 1 01:07:30.003 our_own: 0 01:07:30.003 wide_area: 0 01:07:30.003 multicast: 1 01:07:30.003 cached: 1 01:07:30.003 [2024-07-22 11:02:37.863788] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:07:30.003 [2024-07-22 11:02:37.863909] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 01:07:30.003 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:30.003 cookie is 0 01:07:30.003 is_local: 1 01:07:30.003 our_own: 0 01:07:30.003 wide_area: 0 01:07:30.003 multicast: 1 01:07:30.003 cached: 1 01:07:30.003 [2024-07-22 11:02:37.864064] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 01:07:30.940 [2024-07-22 11:02:38.572413] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:07:30.940 [2024-07-22 11:02:38.572607] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:07:30.940 [2024-07-22 11:02:38.572643] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:30.940 [2024-07-22 11:02:38.658368] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 01:07:30.940 [2024-07-22 11:02:38.718119] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 01:07:30.940 [2024-07-22 11:02:38.718147] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 01:07:30.940 [2024-07-22 11:02:38.771913] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:07:30.940 [2024-07-22 11:02:38.771934] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:07:30.940 [2024-07-22 11:02:38.771946] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:07:30.940 [2024-07-22 11:02:38.857886] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 01:07:31.199 [2024-07-22 11:02:38.917685] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 01:07:31.199 [2024-07-22 11:02:38.917727] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:34.491 
11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:34.491 [2024-07-22 11:02:41.987197] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 01:07:34.491 2024/07/22 11:02:41 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 01:07:34.491 request: 01:07:34.491 { 01:07:34.491 "method": "bdev_nvme_start_mdns_discovery", 01:07:34.491 "params": { 01:07:34.491 "name": "cdc", 01:07:34.491 "svcname": "_nvme-disc._tcp", 01:07:34.491 "hostnqn": "nqn.2021-12.io.spdk:test" 01:07:34.491 } 01:07:34.491 } 01:07:34.491 Got JSON-RPC error response 01:07:34.491 GoRPCClient: error on JSON-RPC call 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:07:34.491 11:02:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 113212 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 113212 01:07:34.491 [2024-07-22 11:02:42.212910] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 113240 01:07:34.491 Got SIGTERM, quitting. 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 01:07:34.491 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 01:07:34.491 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 01:07:34.491 avahi-daemon 0.8 exiting. 
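Both negative-path registrations exercised above -- re-registering the name "mdns" with svcname _nvme-disc._http, and registering "cdc" while _nvme-disc._tcp is already being browsed -- are rejected with Code=-17 Msg=File exists, i.e. the negated Linux errno EEXIST carried back in the JSON-RPC error object, which is exactly what the NOT wrapper (es=1) in the test expects. For reference, plain Python rather than anything from the harness:

    import errno
    import os

    # Code=-17 in the bdev_nvme_start_mdns_discovery errors above is -EEXIST.
    print(errno.EEXIST, os.strerror(errno.EEXIST))   # -> 17 File exists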
01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:07:34.491 rmmod nvme_tcp 01:07:34.491 rmmod nvme_fabrics 01:07:34.491 rmmod nvme_keyring 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 01:07:34.491 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 113162 ']' 01:07:34.492 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 113162 01:07:34.492 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 113162 ']' 01:07:34.492 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 113162 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113162 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:07:34.750 killing process with pid 113162 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113162' 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 113162 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 113162 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:34.750 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:35.008 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:07:35.008 01:07:35.008 real 0m20.335s 01:07:35.008 user 0m38.841s 01:07:35.008 sys 0m2.913s 01:07:35.008 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 01:07:35.008 11:02:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:35.008 ************************************ 01:07:35.008 END TEST nvmf_mdns_discovery 01:07:35.008 ************************************ 01:07:35.008 11:02:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 
0 01:07:35.008 11:02:42 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 01:07:35.008 11:02:42 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:07:35.008 11:02:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:07:35.008 11:02:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:07:35.008 11:02:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:07:35.008 ************************************ 01:07:35.008 START TEST nvmf_host_multipath 01:07:35.008 ************************************ 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:07:35.008 * Looking for test storage... 01:07:35.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:07:35.008 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:07:35.267 Cannot 
find device "nvmf_tgt_br" 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 01:07:35.267 11:02:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:07:35.267 Cannot find device "nvmf_tgt_br2" 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:07:35.267 Cannot find device "nvmf_tgt_br" 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:07:35.267 Cannot find device "nvmf_tgt_br2" 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:07:35.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:35.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:07:35.267 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:07:35.525 11:02:43 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:07:35.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:07:35.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 01:07:35.525 01:07:35.525 --- 10.0.0.2 ping statistics --- 01:07:35.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:35.525 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:07:35.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:07:35.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 01:07:35.525 01:07:35.525 --- 10.0.0.3 ping statistics --- 01:07:35.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:35.525 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:07:35.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:07:35.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 01:07:35.525 01:07:35.525 --- 10.0.0.1 ping statistics --- 01:07:35.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:35.525 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:07:35.525 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=113797 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 113797 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 113797 ']' 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:35.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:35.526 11:02:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:07:35.526 [2024-07-22 11:02:43.402542] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:07:35.526 [2024-07-22 11:02:43.402623] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:35.783 [2024-07-22 11:02:43.524159] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
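The nvmf_veth_init step above wires the initiator and the namespaced target together before nvmf_tgt is launched inside the namespace. As a reading aid, here is a condensed sketch of that topology, assembled only from the ip/iptables commands visible in this log (run as root; several link-up commands, the second target interface, and the cleanup path are omitted, so this is illustrative rather than the helper's exact code):

# Target lives in its own network namespace, reachable over a veth pair
# that is bridged back to the initiator-side veth.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# nvmf_tgt then runs inside the namespace, as the log shows (flags condensed):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3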
01:07:35.783 [2024-07-22 11:02:43.546439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:07:35.783 [2024-07-22 11:02:43.589841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:35.783 [2024-07-22 11:02:43.589894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:35.783 [2024-07-22 11:02:43.589904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:35.783 [2024-07-22 11:02:43.589912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:35.783 [2024-07-22 11:02:43.589919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:35.783 [2024-07-22 11:02:43.590081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:35.783 [2024-07-22 11:02:43.590091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:07:36.350 11:02:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:36.350 11:02:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 01:07:36.350 11:02:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:07:36.350 11:02:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 01:07:36.350 11:02:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:36.610 11:02:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:36.610 11:02:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=113797 01:07:36.610 11:02:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:07:36.610 [2024-07-22 11:02:44.480950] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:36.610 11:02:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:07:36.869 Malloc0 01:07:36.869 11:02:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:07:37.128 11:02:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:07:37.389 11:02:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:37.389 [2024-07-22 11:02:45.297787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:37.389 11:02:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:07:37.649 [2024-07-22 11:02:45.506139] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:07:37.649 11:02:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=113907 01:07:37.649 11:02:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 90 01:07:37.649 11:02:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:07:37.649 11:02:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 113907 /var/tmp/bdevperf.sock 01:07:37.649 11:02:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 113907 ']' 01:07:37.649 11:02:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:37.649 11:02:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:37.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:37.649 11:02:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:07:37.649 11:02:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:37.649 11:02:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:38.587 11:02:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:38.587 11:02:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 01:07:38.587 11:02:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:07:38.845 11:02:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 01:07:39.104 Nvme0n1 01:07:39.104 11:02:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:07:39.363 Nvme0n1 01:07:39.621 11:02:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 01:07:39.621 11:02:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:07:40.558 11:02:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 01:07:40.558 11:02:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:07:40.817 11:02:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:07:40.817 11:02:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 01:07:40.817 11:02:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113989 01:07:40.817 11:02:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:07:40.817 11:02:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113797 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:07:47.399 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | 
.address.trsvcid' 01:07:47.399 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:07:47.399 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:07:47.399 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:47.399 Attaching 4 probes... 01:07:47.399 @path[10.0.0.2, 4421]: 19641 01:07:47.399 @path[10.0.0.2, 4421]: 20069 01:07:47.399 @path[10.0.0.2, 4421]: 19899 01:07:47.399 @path[10.0.0.2, 4421]: 20321 01:07:47.399 @path[10.0.0.2, 4421]: 20442 01:07:47.399 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:07:47.399 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:07:47.400 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:07:47.400 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:07:47.400 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:07:47.400 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:07:47.400 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113989 01:07:47.400 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:47.400 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 01:07:47.400 11:02:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:07:47.400 11:02:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:07:47.657 11:02:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 01:07:47.657 11:02:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114120 01:07:47.657 11:02:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113797 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:07:47.657 11:02:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:54.229 Attaching 4 probes... 
01:07:54.229 @path[10.0.0.2, 4420]: 20125 01:07:54.229 @path[10.0.0.2, 4420]: 19540 01:07:54.229 @path[10.0.0.2, 4420]: 20773 01:07:54.229 @path[10.0.0.2, 4420]: 20879 01:07:54.229 @path[10.0.0.2, 4420]: 20801 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114120 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113797 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114251 01:07:54.229 11:03:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:00.789 Attaching 4 probes... 
01:08:00.789 @path[10.0.0.2, 4421]: 16244 01:08:00.789 @path[10.0.0.2, 4421]: 20272 01:08:00.789 @path[10.0.0.2, 4421]: 20236 01:08:00.789 @path[10.0.0.2, 4421]: 20266 01:08:00.789 @path[10.0.0.2, 4421]: 20153 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114251 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114381 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113797 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:08:00.789 11:03:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:07.345 Attaching 4 probes... 
01:08:07.345 01:08:07.345 01:08:07.345 01:08:07.345 01:08:07.345 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114381 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 01:08:07.345 11:03:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:08:07.345 11:03:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:08:07.345 11:03:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 01:08:07.345 11:03:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114508 01:08:07.345 11:03:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:08:07.345 11:03:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113797 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:13.956 Attaching 4 probes... 
01:08:13.956 @path[10.0.0.2, 4421]: 17426 01:08:13.956 @path[10.0.0.2, 4421]: 17798 01:08:13.956 @path[10.0.0.2, 4421]: 17763 01:08:13.956 @path[10.0.0.2, 4421]: 17693 01:08:13.956 @path[10.0.0.2, 4421]: 17744 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114508 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:13.956 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:08:13.956 [2024-07-22 11:03:21.675522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 
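The burst of nvmf_tcp_qpair_set_recv_state messages here follows the nvmf_subsystem_remove_listener call above, which drops the optimized 4421 path while bdevperf still has I/O in flight, so the multipath host is forced over to 4420 (confirmed later in the log, before the listener is re-added). A minimal sketch of that remove/restore sequence, using the same rpc.py calls that appear in this run (paths shortened; the already-running target and bdevperf session from this test are assumed):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Drop the 4421 listener mid-I/O; the host with -x multipath fails over to 4420.
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
# Later, restore the path and advertise it as optimized again.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n optimized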
01:08:13.956 [2024-07-22 11:03:21.675729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675882] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.956 [2024-07-22 11:03:21.675984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.675993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676065] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676098] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 [2024-07-22 11:03:21.676258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ade0 is same with the state(5) to be set 01:08:13.957 11:03:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 01:08:14.893 11:03:22 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 01:08:14.893 11:03:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114645 01:08:14.893 11:03:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113797 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:08:14.893 11:03:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:08:21.465 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:21.466 Attaching 4 probes... 01:08:21.466 @path[10.0.0.2, 4420]: 19910 01:08:21.466 @path[10.0.0.2, 4420]: 20354 01:08:21.466 @path[10.0.0.2, 4420]: 20239 01:08:21.466 @path[10.0.0.2, 4420]: 20236 01:08:21.466 @path[10.0.0.2, 4420]: 22256 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114645 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:21.466 11:03:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:08:21.466 [2024-07-22 11:03:29.173063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:08:21.466 11:03:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:08:21.736 11:03:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 01:08:28.312 11:03:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 01:08:28.312 11:03:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114837 01:08:28.312 11:03:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113797 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:08:28.312 11:03:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:08:33.583 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:08:33.583 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select 
(.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:33.842 Attaching 4 probes... 01:08:33.842 @path[10.0.0.2, 4421]: 22011 01:08:33.842 @path[10.0.0.2, 4421]: 22290 01:08:33.842 @path[10.0.0.2, 4421]: 22497 01:08:33.842 @path[10.0.0.2, 4421]: 22368 01:08:33.842 @path[10.0.0.2, 4421]: 22355 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114837 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 113907 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 113907 ']' 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 113907 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113907 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:08:33.842 killing process with pid 113907 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113907' 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 113907 01:08:33.842 11:03:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 113907 01:08:33.842 Connection closed with partial response: 01:08:33.842 01:08:33.842 01:08:34.124 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 113907 01:08:34.124 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:08:34.124 [2024-07-22 11:02:45.577893] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:08:34.124 [2024-07-22 11:02:45.577990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113907 ] 01:08:34.124 [2024-07-22 11:02:45.696195] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
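At this point bdevperf (pid 113907) has been killed and the test dumps its saved output. Each confirm_io_on_port block earlier in the run passes only when the listener advertising the expected ANA state and the port the bpftrace probes actually counted I/O on agree. A condensed sketch of that check, built from the jq/awk/cut/sed expressions shown in the log (the pipeline order is not visible in the log, so this ordering is an assumption; paths shortened):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Port of the listener whose first ANA state matches the expected one.
active_port=$($RPC nvmf_subsystem_get_listeners $NQN \
  | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
# Port the nvmf_path.bt probes recorded I/O on ("@path[10.0.0.2, 4421]: ..." lines).
port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
# The stage passes only if both agree.
[[ "$port" == "$active_port" ]]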
01:08:34.124 [2024-07-22 11:02:45.722095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:34.124 [2024-07-22 11:02:45.768770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:08:34.124 Running I/O for 90 seconds... 01:08:34.124 [2024-07-22 11:02:55.330226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330717] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.330921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.330934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.331053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.331069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.331088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.331101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.331120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.331133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 
sqhd:002e p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.331150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.331163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.331180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.331192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.331210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.331223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.331240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.331259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.331289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.331302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.333263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.333292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.333312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.333325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.333344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.333356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.333374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.333386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.333404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.333417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:34.124 [2024-07-22 11:02:55.333435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.124 [2024-07-22 11:02:55.333447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.125 [2024-07-22 11:02:55.333478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 
11:02:55.333730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.333953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.333971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.125 [2024-07-22 11:02:55.333991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125992 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f 
p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.334973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.334985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.335003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.125 [2024-07-22 11:02:55.335016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:34.125 [2024-07-22 11:02:55.335037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.335528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.126 [2024-07-22 
11:02:55.335559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.126 [2024-07-22 11:02:55.335589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.126 [2024-07-22 11:02:55.335619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.126 [2024-07-22 11:02:55.335650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.126 [2024-07-22 11:02:55.335679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.126 [2024-07-22 11:02:55.335711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.335730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.126 [2024-07-22 11:02:55.335742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.126 [2024-07-22 11:02:55.336419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.126 [2024-07-22 11:02:55.336459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126312 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.126 [2024-07-22 11:02:55.336953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:34.126 [2024-07-22 11:02:55.336971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.336983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337425] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:02:55.337444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:02:55.337456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.776542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.776603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.776648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:03:01.776663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.776681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:03:01.776694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.776711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:03:01.776724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.776760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:03:01.776773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.776790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:03:01.776802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.776820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:03:01.776833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.127 [2024-07-22 11:03:01.777058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 
[2024-07-22 11:03:01.777089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129560 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.127 [2024-07-22 11:03:01.777977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:34.127 [2024-07-22 11:03:01.777997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 
dnr:0 01:08:34.128 [2024-07-22 11:03:01.778405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.128 [2024-07-22 11:03:01.778977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.778996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.128 [2024-07-22 11:03:01.779009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.779028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.128 [2024-07-22 11:03:01.779041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:34.128 [2024-07-22 11:03:01.779060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779697] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.779970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.779990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.780003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:34.129 
[2024-07-22 11:03:01.780023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.780036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.129 [2024-07-22 11:03:01.780192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:34.129 [2024-07-22 11:03:01.780601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.129 [2024-07-22 11:03:01.780614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.780637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.780650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.780673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.780692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.780716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.780729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.780752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.780765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.780788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.780801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.780824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.780837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.780860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.780873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.780897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.780909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.780932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.780945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.780968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.780981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.781005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.781018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.781041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.781054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.781078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:01.781091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.781114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.130 [2024-07-22 11:03:01.781126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.781153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.130 [2024-07-22 11:03:01.781168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.781191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.130 [2024-07-22 11:03:01.781204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.781227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:08:34.130 [2024-07-22 11:03:01.781240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.781263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.130 [2024-07-22 11:03:01.781283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.781307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.130 [2024-07-22 11:03:01.781320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:01.781343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.130 [2024-07-22 11:03:01.781356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.600784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.130 [2024-07-22 11:03:08.600907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.600938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.600952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.600970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.600983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:24 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:34.130 [2024-07-22 11:03:08.601871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.130 [2024-07-22 11:03:08.601884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.601902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.601915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.601933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.601947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.601965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.601978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.601996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:34.131 
[2024-07-22 11:03:08.602026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602642] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602951] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.602969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.602982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.603000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.603012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.603030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.603043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.603060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.603073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.603091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.603104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.603121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.603134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.603152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.603165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:34.131 [2024-07-22 11:03:08.603183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.131 [2024-07-22 11:03:08.603196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:121 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.603721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.603734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.604361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.604386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.604408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.604421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.604439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.604452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:34.132 [2024-07-22 11:03:08.604470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.132 [2024-07-22 11:03:08.604483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.604501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.604514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.604532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.604544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.604562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.604575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.604592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.604605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.604623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.604635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.604653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.604666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.604683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.604696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.604725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.604738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.604756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.604768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.604786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.604799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:34.133 
[2024-07-22 11:03:08.605391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.605409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.605427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.605440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.605458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.605471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.605489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.605506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.605523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.605536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.605554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.133 [2024-07-22 11:03:08.605567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:34.133 [2024-07-22 11:03:08.605585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.134 [2024-07-22 11:03:08.605726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.605978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.605991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606024] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606918] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.606979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.606997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:34.134 [2024-07-22 11:03:08.607488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.134 [2024-07-22 11:03:08.607501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:44 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.607985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.607999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.608016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.608029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.608046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.608061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.608078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.608091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.608109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.608124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.608142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.608154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
01:08:34.135 [2024-07-22 11:03:08.608172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.608185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.608203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.608222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.608240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.608253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.624974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.624999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.135 [2024-07-22 11:03:08.625016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:34.135 [2024-07-22 11:03:08.625955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.625985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:34.136 [2024-07-22 11:03:08.626344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.626959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.626983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
01:08:34.136 [2024-07-22 11:03:08.627569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.136 [2024-07-22 11:03:08.627667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:34.136 [2024-07-22 11:03:08.627691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.137 [2024-07-22 11:03:08.627714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:34.137 [2024-07-22 11:03:08.627738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.137 [2024-07-22 11:03:08.627755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:34.137 [2024-07-22 11:03:08.627779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.137 [2024-07-22 11:03:08.627796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:34.137 [2024-07-22 11:03:08.627819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.137 [2024-07-22 11:03:08.627836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:34.137 [2024-07-22 11:03:08.627860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.137 [2024-07-22 11:03:08.627877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:34.137 [2024-07-22 11:03:08.627901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.137 [2024-07-22 11:03:08.627917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:34.137 [2024-07-22 11:03:08.627941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.137 [2024-07-22 11:03:08.627959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:34.137
[repeated nvme_qpair.c *NOTICE* output, 2024-07-22 11:03:08.627983 through 11:03:08.638186 (elapsed 01:08:34.137-01:08:34.142): 243:nvme_io_qpair_print_command prints each outstanding I/O on sqid:1 (WRITE, plus an occasional READ at lba:75416 with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; nsid:1, lba range 75416-76432, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), and 474:spdk_nvme_print_completion reports every completion as ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
01:08:34.142 [2024-07-22 11:03:08.638186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE
(03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.638208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.638224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.638246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.638273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.638297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.638313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.638335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.638350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.638372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.638388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.638415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.638431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.638453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.638469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.638491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.638506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.638529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.638545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:34.142 [2024-07-22 11:03:08.639715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.639964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.639979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.640001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.640017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.640039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.640055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.640077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.640097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.640119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.640136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.640157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.640173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.640195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.640211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.640233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.640249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.640283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.640300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.640322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.142 [2024-07-22 11:03:08.640338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:34.142 [2024-07-22 11:03:08.640360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
01:08:34.143 [2024-07-22 11:03:08.640856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.640970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.640985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.143 [2024-07-22 11:03:08.641255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641610] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.641707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.641724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.642453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.642479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.642504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.642520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.642542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.642558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.642591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.642604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.642622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.642634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.642652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.143 [2024-07-22 11:03:08.642664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:34.143 [2024-07-22 11:03:08.642682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.642702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.642720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.642732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.642750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.642763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.642781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.642794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.642811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.642824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.642842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.642855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.642872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.642885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.642902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.642915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.642933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.642945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.642963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.642976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.642994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
01:08:34.144 [2024-07-22 11:03:08.643940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.643971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.144 [2024-07-22 11:03:08.643983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:34.144 [2024-07-22 11:03:08.644001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.644471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.644484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 
[2024-07-22 11:03:08.645476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76392 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.145 [2024-07-22 11:03:08.645855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:34.145 [2024-07-22 11:03:08.645873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.645885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.645903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.645916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.645933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.645946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.645963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.645976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.645998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646089] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 
11:03:08.646405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.146 [2024-07-22 11:03:08.646724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.646974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.646992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.647004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.647022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.146 [2024-07-22 11:03:08.647035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:34.146 [2024-07-22 11:03:08.647053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.647681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.647723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.647762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.647793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.647825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.647855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.647885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.647916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.647946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.647977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.647989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:68 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:34.147 [2024-07-22 11:03:08.648943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.147 [2024-07-22 11:03:08.648956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.648974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.648986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
01:08:34.148 [2024-07-22 11:03:08.649155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.649659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.649672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:34.148 [2024-07-22 11:03:08.650694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:34.148 [2024-07-22 11:03:08.650863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.148 [2024-07-22 11:03:08.650876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.650893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.650906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.650924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.650936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.650954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.650966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.650984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.650997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
01:08:34.149 [2024-07-22 11:03:08.651614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.149 [2024-07-22 11:03:08.651934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.651982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.651995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.652012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.652025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.652042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.652055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.652073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.652085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.652103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.652116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.652133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.149 [2024-07-22 11:03:08.652146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:34.149 [2024-07-22 11:03:08.652164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.652180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.652198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.652211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.652229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.652242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.652832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.652853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.652873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.652887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.652905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.652918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.652937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.652950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.652968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.652981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.652999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653102] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:54 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.150 [2024-07-22 11:03:08.653867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:34.150 [2024-07-22 11:03:08.653885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.653898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.653915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.653932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.653950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.653963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.653980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.653993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
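Each pair of NOTICE records in this stretch is SPDK printing, first, the I/O command being reported back to the caller (nvme_io_qpair_print_command) and, second, the completion status returned for it (spdk_nvme_print_completion). For readers not used to the completion format, the sketch below pulls one record, copied verbatim from the output above, apart into named fields; it is an illustrative note only, not something this test runs.

# Illustrative sketch only: name the fields of one completion record copied
# verbatim from the output above.  Field meanings follow the NVMe
# completion-queue-entry layout that spdk_nvme_print_completion() prints.
record = "ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0"

fields = {
    "status": "ASYMMETRIC ACCESS INACCESSIBLE",  # textual status string
    "sct/sc": (0x03, 0x02),  # status code type 3h (path related), status code 02h
    "qid": 1,      # I/O queue pair the command was submitted on
    "cid": 57,     # command identifier; matches the WRITE line printed just before it
    "cdw0": 0x0,   # command-specific completion dword 0
    "sqhd": 0x7e,  # submission queue head pointer reported in the completion
    "p": 0,        # phase tag of the completion queue entry
    "m": 0,        # "more" bit: extra status available in the error log page
    "dnr": 0,      # "do not retry" bit: 0, so the command may be retried
}

print(record)
for name, value in fields.items():
    print(f"{name:>6}: {value}")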
01:08:34.151 [2024-07-22 11:03:08.654357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.654823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.654835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:34.151 [2024-07-22 11:03:08.655783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.151 [2024-07-22 11:03:08.655797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.655814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.655827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.655845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.655857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.655875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 
[2024-07-22 11:03:08.655888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.655905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.655918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.655936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.655949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.655967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.655980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.655997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76392 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656504] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 
11:03:08.656807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.656970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.656983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.657001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.657014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.657031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.657044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.657061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.657074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:34.152 [2024-07-22 11:03:08.657092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.152 [2024-07-22 11:03:08.657105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.657122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.153 [2024-07-22 11:03:08.657135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.657153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.657165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.657183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.657196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.657218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.657231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.657249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.657261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.657289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.657302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.657320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.657333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.657350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.657363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.657380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.657393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.657411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.657424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:34.153 [2024-07-22 11:03:08.658668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.658971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.658993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.659005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:34.153 [2024-07-22 11:03:08.659023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.153 [2024-07-22 11:03:08.659036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.659536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.659554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.665678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
01:08:34.154 [2024-07-22 11:03:08.665705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.665718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.665736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.665756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.665774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.665787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.665805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.665818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.665835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.665848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.665866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.665885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.665903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.665916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.665934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.665946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.665964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.665976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.665994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.666007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.666025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.666038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.666056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.666069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.666086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.666099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.666116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.666129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.666148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.666160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.666790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.666813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.666833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.666847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.666865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.666885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:34.154 [2024-07-22 11:03:08.666904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.154 [2024-07-22 11:03:08.666917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.666934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.666947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.666964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.666978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.666996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:34.155 [2024-07-22 11:03:08.667251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.667969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.667987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.668000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.668017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.668034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.668052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.668064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.668082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.668095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.668113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.668125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.668143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.668157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
01:08:34.155 [2024-07-22 11:03:08.668174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.155 [2024-07-22 11:03:08.668187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:34.155 [2024-07-22 11:03:08.668204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.156 [2024-07-22 11:03:08.668505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.668972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.668991] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669368] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.156 [2024-07-22 11:03:08.669825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:34.156 [2024-07-22 11:03:08.669847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.669859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.669881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.669898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.669919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.669932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.669953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.669966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.669987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
01:08:34.157 [2024-07-22 11:03:08.670753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.670970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.670992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.671004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.671025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.671038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.671059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.671072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.671094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.671106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.671128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.671140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.671165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.671178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.671200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.671212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:08.671367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.157 [2024-07-22 11:03:08.671383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:21.676972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.157 [2024-07-22 11:03:21.677021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:34.157 [2024-07-22 11:03:21.677063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.677976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.677990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.678002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.678015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.678028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.678041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.678053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.678067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.678080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.678093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.678106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.678121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.678133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.678147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.678159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.678173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.678189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.678203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.678216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.678229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.158 [2024-07-22 11:03:21.678242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.158 [2024-07-22 11:03:21.678255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 
[2024-07-22 11:03:21.678474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.159 [2024-07-22 11:03:21.678571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.159 [2024-07-22 11:03:21.678597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.159 [2024-07-22 11:03:21.678623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.159 [2024-07-22 11:03:21.678649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.159 [2024-07-22 11:03:21.678675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.159 [2024-07-22 11:03:21.678701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.159 [2024-07-22 11:03:21.678728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678742] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.159 [2024-07-22 11:03:21.678754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.159 [2024-07-22 11:03:21.678781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.678981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.678994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:46 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.159 [2024-07-22 11:03:21.679399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.159 [2024-07-22 11:03:21.679420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109888 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:34.160 [2024-07-22 11:03:21.679826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.679975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.679988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.680014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.680040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:34.160 [2024-07-22 11:03:21.680066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680106] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110048 len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.160 [2024-07-22 11:03:21.680146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110056 len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.160 [2024-07-22 11:03:21.680189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110064 len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.160 [2024-07-22 11:03:21.680236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110072 len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.160 [2024-07-22 11:03:21.680290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110080 len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.160 [2024-07-22 11:03:21.680333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110088 len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.160 [2024-07-22 11:03:21.680376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110096 
len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.160 [2024-07-22 11:03:21.680419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110104 len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.160 [2024-07-22 11:03:21.680462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110112 len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.160 [2024-07-22 11:03:21.680504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110120 len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.160 [2024-07-22 11:03:21.680538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.160 [2024-07-22 11:03:21.680546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.160 [2024-07-22 11:03:21.680556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110128 len:8 PRP1 0x0 PRP2 0x0 01:08:34.160 [2024-07-22 11:03:21.680568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.680584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.680593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.680602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110136 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.680615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.680627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.680637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.680646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110144 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 
11:03:21.680661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.680673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.680682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.680691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110152 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.680703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.680716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.680724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.680733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110160 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.680745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.680758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.680767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.680776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110168 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.680788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.680800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.680809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.680818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110176 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.680830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.680843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.680851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.700403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110184 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.700458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.700488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.700506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.700524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110192 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.700564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.700588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.700605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.700622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110200 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.700645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.700668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.700685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.700703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110208 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.700725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.700748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.700765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.700781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110216 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.700804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.700826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:34.161 [2024-07-22 11:03:21.700843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:34.161 [2024-07-22 11:03:21.700860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110224 len:8 PRP1 0x0 PRP2 0x0 01:08:34.161 [2024-07-22 11:03:21.700882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.700954] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16ae360 was disconnected and freed. reset controller. 
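The block of *NOTICE* lines above is the host side aborting every queued READ/WRITE with "ABORTED - SQ DELETION" once the active path's submission queue is torn down. When scanning a run like this by hand, a throwaway shell one-liner (a hypothetical helper, not part of the SPDK test scripts; "console.log" stands in for this console output saved to a file) can summarize how many completions were aborted per queue:

  # Count ABORTED - SQ DELETION completions per qid in the saved console log
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c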
01:08:34.161 [2024-07-22 11:03:21.701117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:08:34.161 [2024-07-22 11:03:21.701147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.701171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:08:34.161 [2024-07-22 11:03:21.701194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.701217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:08:34.161 [2024-07-22 11:03:21.701239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.701282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:08:34.161 [2024-07-22 11:03:21.701306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:34.161 [2024-07-22 11:03:21.701331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:34.161 [2024-07-22 11:03:21.701354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 11:03:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:08:34.161 [2024-07-22 11:03:21.701396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c750 is same with the state(5) to be set 01:08:34.161 [2024-07-22 11:03:21.703608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:08:34.161 [2024-07-22 11:03:21.703662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c750 (9): Bad file descriptor 01:08:34.161 [2024-07-22 11:03:21.703827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:34.161 [2024-07-22 11:03:21.703862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c750 with addr=10.0.0.2, port=4421 01:08:34.161 [2024-07-22 11:03:21.703887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c750 is same with the state(5) to be set 01:08:34.161 [2024-07-22 11:03:21.703921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c750 (9): Bad file descriptor 01:08:34.161 [2024-07-22 11:03:21.703953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:08:34.161 [2024-07-22 11:03:21.703975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:08:34.161 [2024-07-22 11:03:21.703999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
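The multipath.sh trace interleaved with the driver output above shows what triggered this burst: the test removes the subsystem backing the active path, and the host's bdev_nvme layer reacts by failing queued I/O, disconnecting, and retrying the connection. Roughly, the target-side step visible at host/multipath.sh@120 is just the one RPC below (paths, NQN, and addresses are taken from this log, not invented):

  # Path removal as traced above; rpc.py nvmf_delete_subsystem is a standard SPDK RPC
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # The host then keeps retrying the connection to 10.0.0.2:4421 (seen failing with
  # errno 111 above) until it succeeds about ten seconds later, which is the
  # "Resetting controller successful" notice in the next entry.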
01:08:34.161 [2024-07-22 11:03:21.704032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:08:34.161 [2024-07-22 11:03:21.704051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:08:34.161 [2024-07-22 11:03:31.762749] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 01:08:34.161 Received shutdown signal, test time was about 54.368451 seconds 01:08:34.161 01:08:34.161 Latency(us) 01:08:34.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:34.161 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:08:34.161 Verification LBA range: start 0x0 length 0x4000 01:08:34.161 Nvme0n1 : 54.37 8782.44 34.31 0.00 0.00 14558.05 1039.63 7115156.67 01:08:34.161 =================================================================================================================== 01:08:34.161 Total : 8782.44 34.31 0.00 0.00 14558.05 1039.63 7115156.67 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:08:34.421 rmmod nvme_tcp 01:08:34.421 rmmod nvme_fabrics 01:08:34.421 rmmod nvme_keyring 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 113797 ']' 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 113797 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 113797 ']' 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 113797 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:34.421 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113797 01:08:34.422 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:08:34.422 killing process with pid 113797 01:08:34.422 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:08:34.422 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113797' 01:08:34.422 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 113797 01:08:34.422 11:03:42 
nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 113797 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:08:34.681 01:08:34.681 real 0m59.694s 01:08:34.681 user 2m45.182s 01:08:34.681 sys 0m16.727s 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 01:08:34.681 11:03:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:08:34.681 ************************************ 01:08:34.681 END TEST nvmf_host_multipath 01:08:34.681 ************************************ 01:08:34.681 11:03:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:08:34.681 11:03:42 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:08:34.681 11:03:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:08:34.681 11:03:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:08:34.682 11:03:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:08:34.682 ************************************ 01:08:34.682 START TEST nvmf_timeout 01:08:34.682 ************************************ 01:08:34.682 11:03:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:08:34.942 * Looking for test storage... 
01:08:34.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:34.942 
11:03:42 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 01:08:34.942 11:03:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:34.943 11:03:42 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:08:34.943 Cannot find device "nvmf_tgt_br" 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:08:34.943 Cannot find device "nvmf_tgt_br2" 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:08:34.943 Cannot find device "nvmf_tgt_br" 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:08:34.943 Cannot find device "nvmf_tgt_br2" 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:34.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 01:08:34.943 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:35.203 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:08:35.203 11:03:42 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:08:35.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:35.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 01:08:35.203 01:08:35.203 --- 10.0.0.2 ping statistics --- 01:08:35.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:35.203 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:08:35.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:08:35.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 01:08:35.203 01:08:35.203 --- 10.0.0.3 ping statistics --- 01:08:35.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:35.203 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:35.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:35.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 01:08:35.203 01:08:35.203 --- 10.0.0.1 ping statistics --- 01:08:35.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:35.203 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:08:35.203 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=115164 01:08:35.204 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 115164 01:08:35.204 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:08:35.204 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115164 ']' 01:08:35.204 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:35.204 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:35.204 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:35.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:35.204 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:35.204 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:08:35.463 [2024-07-22 11:03:43.145129] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
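Before the target application is started, nvmftestinit / nvmf_veth_init (traced above) builds a small virtual topology: the initiator stays in the root network namespace on 10.0.0.1, the two target interfaces live inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, and all three veth peers are joined by the nvmf_br bridge, then verified with the three pings. A condensed sketch of that setup, using only the interface names and addresses shown in the trace (the per-link "ip link set ... up" commands and the stale-interface cleanup are left out):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, first port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, second port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # bridge the three veth peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator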
01:08:35.463 [2024-07-22 11:03:43.145199] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:35.463 [2024-07-22 11:03:43.263654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:08:35.463 [2024-07-22 11:03:43.286820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:08:35.463 [2024-07-22 11:03:43.330528] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:35.463 [2024-07-22 11:03:43.330580] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:35.463 [2024-07-22 11:03:43.330589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:35.463 [2024-07-22 11:03:43.330597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:35.463 [2024-07-22 11:03:43.330604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:35.463 [2024-07-22 11:03:43.330810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:08:35.463 [2024-07-22 11:03:43.330811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:08:36.400 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:36.400 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:08:36.400 11:03:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:08:36.400 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 01:08:36.400 11:03:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:08:36.400 11:03:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:36.400 11:03:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:08:36.400 11:03:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:08:36.400 [2024-07-22 11:03:44.219298] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:36.400 11:03:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:08:36.659 Malloc0 01:08:36.659 11:03:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:08:36.918 11:03:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:08:37.177 11:03:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:08:37.437 [2024-07-22 11:03:45.137284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:37.437 11:03:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:08:37.437 11:03:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=115255 01:08:37.437 11:03:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 115255 /var/tmp/bdevperf.sock 01:08:37.437 11:03:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115255 ']' 01:08:37.437 11:03:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:08:37.437 11:03:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:37.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:08:37.437 11:03:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:08:37.437 11:03:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:37.437 11:03:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:08:37.437 [2024-07-22 11:03:45.194237] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:08:37.437 [2024-07-22 11:03:45.194332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115255 ] 01:08:37.437 [2024-07-22 11:03:45.312422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:08:37.437 [2024-07-22 11:03:45.336766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:37.696 [2024-07-22 11:03:45.382213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:08:38.264 11:03:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:38.264 11:03:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:08:38.264 11:03:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:08:38.523 11:03:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:08:38.781 NVMe0n1 01:08:38.781 11:03:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:08:38.781 11:03:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=115297 01:08:38.781 11:03:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 01:08:38.781 Running I/O for 10 seconds... 
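Summarising the setup traced above before the fault is injected: the target gets a TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks exposed as a namespace of nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420; a separate bdevperf process (queue depth 128, 4096-byte I/O, verify workload, 10 seconds) then attaches a controller to that subsystem with a 5-second controller-loss timeout and a 2-second reconnect delay, and the workload is kicked off over the bdevperf RPC socket with perform_tests. The listener is removed in the very next step of the trace, which is what produces the abort storm that follows. A condensed sketch of the RPC sequence, taken directly from the trace (only the long rpc.py / bdevperf.py paths are shortened):

    # Target side (default RPC socket /var/tmp/spdk.sock)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: bdevperf is driven through its own RPC socket and attaches the
    # controller with the timeout knobs this test exercises.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests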
01:08:39.716 11:03:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
01:08:39.979 [2024-07-22 11:03:47.742130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3af90 is same with the state(5) to be set
01:08:39.979 [... the identical nvmf_tcp_qpair_set_recv_state message for tqpair=0x1f3af90 repeats many times through 11:03:47.742729 while the listener is torn down ...]
01:08:39.979 [2024-07-22 11:03:47.743944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:08:39.979 [2024-07-22 11:03:47.744105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:08:39.979 [... the same command/completion pair is printed for each remaining outstanding I/O on qid:1 (READs at lba 103768-104368 and WRITEs at lba 104432-104504, 8 blocks each), every one completed as ABORTED - SQ DELETION ...]
01:08:39.982 [2024-07-22 11:03:47.745844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:39.982 [2024-07-22 11:03:47.745852] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.745862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:39.982 [2024-07-22 11:03:47.745871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.745881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:39.982 [2024-07-22 11:03:47.745890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.745900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:39.982 [2024-07-22 11:03:47.745909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.745919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:39.982 [2024-07-22 11:03:47.745927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.745937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:39.982 [2024-07-22 11:03:47.745945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.745955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:39.982 [2024-07-22 11:03:47.745964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.745974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.745982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.745992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 
11:03:47.746423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:39.982 [2024-07-22 11:03:47.746580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:39.982 [2024-07-22 11:03:47.746617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:39.982 [2024-07-22 11:03:47.746625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104776 len:8 PRP1 0x0 PRP2 0x0 
01:08:39.982 [2024-07-22 11:03:47.746633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.982 [2024-07-22 11:03:47.746682] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18e3dc0 was disconnected and freed. reset controller. 01:08:39.982 [2024-07-22 11:03:47.746768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:08:39.983 [2024-07-22 11:03:47.746779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.983 [2024-07-22 11:03:47.746789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:08:39.983 [2024-07-22 11:03:47.746798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.983 [2024-07-22 11:03:47.746807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:08:39.983 [2024-07-22 11:03:47.746815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.983 [2024-07-22 11:03:47.746827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:08:39.983 [2024-07-22 11:03:47.746836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:39.983 [2024-07-22 11:03:47.746844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8650 is same with the state(5) to be set 01:08:39.983 [2024-07-22 11:03:47.747014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:08:39.983 [2024-07-22 11:03:47.747030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8650 (9): Bad file descriptor 01:08:39.983 [2024-07-22 11:03:47.747103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:39.983 [2024-07-22 11:03:47.747117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8650 with addr=10.0.0.2, port=4420 01:08:39.983 [2024-07-22 11:03:47.747126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8650 is same with the state(5) to be set 01:08:39.983 [2024-07-22 11:03:47.747139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8650 (9): Bad file descriptor 01:08:39.983 [2024-07-22 11:03:47.747156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:08:39.983 [2024-07-22 11:03:47.747164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:08:39.983 [2024-07-22 11:03:47.747174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:08:39.983 [2024-07-22 11:03:47.747189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
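The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: nothing is listening on 10.0.0.2:4420 any more, so every reconnect attempt made while the driver reports "resetting controller" fails and the controller stays in its error/reset loop. As a purely illustrative aside (not part of timeout.sh), a small bash check using the address and port from the trace reproduces the same refusal from the host side:

# illustrative only; addr/port taken from the trace above
addr=10.0.0.2; port=4420
if timeout 1 bash -c ">/dev/tcp/${addr}/${port}" 2>/dev/null; then
    echo "${addr}:${port} is accepting connections"
else
    echo "connect() to ${addr}:${port} refused or timed out (errno 111 = ECONNREFUSED)"
fi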
01:08:39.983 [2024-07-22 11:03:47.747198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:08:39.983 11:03:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 01:08:41.906 [2024-07-22 11:03:49.744202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:41.906 [2024-07-22 11:03:49.744403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8650 with addr=10.0.0.2, port=4420 01:08:41.906 [2024-07-22 11:03:49.744593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8650 is same with the state(5) to be set 01:08:41.906 [2024-07-22 11:03:49.744697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8650 (9): Bad file descriptor 01:08:41.906 [2024-07-22 11:03:49.744768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:08:41.906 [2024-07-22 11:03:49.744856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:08:41.906 [2024-07-22 11:03:49.744904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:08:41.906 [2024-07-22 11:03:49.744947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:08:41.906 [2024-07-22 11:03:49.744977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:08:41.906 11:03:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 01:08:41.906 11:03:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:08:41.906 11:03:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 01:08:42.164 11:03:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 01:08:42.164 11:03:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 01:08:42.164 11:03:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 01:08:42.164 11:03:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 01:08:42.423 11:03:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 01:08:42.423 11:03:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 01:08:44.327 [2024-07-22 11:03:51.741967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:44.327 [2024-07-22 11:03:51.742279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8650 with addr=10.0.0.2, port=4420 01:08:44.327 [2024-07-22 11:03:51.742381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8650 is same with the state(5) to be set 01:08:44.327 [2024-07-22 11:03:51.742448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8650 (9): Bad file descriptor 01:08:44.327 [2024-07-22 11:03:51.742515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:08:44.327 [2024-07-22 11:03:51.742616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:08:44.327 [2024-07-22 11:03:51.742667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 01:08:44.327 [2024-07-22 11:03:51.742711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:08:44.327 [2024-07-22 11:03:51.742741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:08:46.238 [2024-07-22 11:03:53.739609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:08:46.238 [2024-07-22 11:03:53.740071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:08:46.238 [2024-07-22 11:03:53.740263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:08:46.238 [2024-07-22 11:03:53.740289] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 01:08:46.238 [2024-07-22 11:03:53.740321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:08:46.807 01:08:46.807 Latency(us) 01:08:46.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:46.807 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:08:46.807 Verification LBA range: start 0x0 length 0x4000 01:08:46.807 NVMe0n1 : 8.13 1596.27 6.24 15.75 0.00 79465.27 1579.18 7061253.96 01:08:46.807 =================================================================================================================== 01:08:46.807 Total : 1596.27 6.24 15.75 0.00 79465.27 1579.18 7061253.96 01:08:47.067 0 01:08:47.326 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 01:08:47.327 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:08:47.327 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 01:08:47.586 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 01:08:47.586 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 01:08:47.586 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 01:08:47.586 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 115297 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 115255 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115255 ']' 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115255 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115255 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:08:47.846 killing process with pid 115255 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115255' 01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115255 
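Condensed, the checks traced above at host/timeout.sh@37..@63 amount to: query the controller and bdev names over bdevperf's RPC socket, expect NVMe0/NVMe0n1 shortly after the disconnect while the controller is still being retried, then expect both queries to come back empty once bdev_nvme has given up and deleted them. A rough bash sketch of that sequence, with the socket path, RPC calls, and expected names copied from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
get_bdev()       { "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'; }

sleep 2                                 # shortly after the listener went away
[[ $(get_controller) == "NVMe0" ]]      # controller still reported
[[ $(get_bdev) == "NVMe0n1" ]]          # namespace bdev still reported
sleep 5                                 # wait out the remaining retry window
[[ $(get_controller) == "" ]]           # controller has been removed
[[ $(get_bdev) == "" ]]                 # bdev deleted along with it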
01:08:47.846 Received shutdown signal, test time was about 9.128768 seconds
01:08:47.846
01:08:47.846 Latency(us)
01:08:47.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:08:47.846 ===================================================================================================================
01:08:47.846 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:08:47.846 11:03:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115255
01:08:48.106 11:03:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
01:08:48.365 [2024-07-22 11:03:56.100976] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
01:08:48.365 11:03:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
01:08:48.365 11:03:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=115449
01:08:48.365 11:03:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 115449 /var/tmp/bdevperf.sock
01:08:48.365 11:03:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115449 ']'
01:08:48.365 11:03:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:08:48.365 11:03:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
01:08:48.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
01:08:48.365 11:03:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
01:08:48.365 11:03:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
01:08:48.365 11:03:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
01:08:48.365 [2024-07-22 11:03:56.161559] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
01:08:48.365 [2024-07-22 11:03:56.161635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115449 ]
01:08:48.365 [2024-07-22 11:03:56.281234] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
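The records here and immediately after re-arm the next test case: the TCP listener is added back, bdevperf is started idle with -z so it only runs I/O once it has been configured over /var/tmp/bdevperf.sock, and the controller is then attached with the reconnect/loss timeouts under test before perform_tests starts the workload. Roughly, with every command and flag copied from the trace (here and in the records that follow); the real harness waits for the RPC socket via waitforlisten, simplified below to a socket poll:

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# re-add the TCP listener that the previous test case removed
"$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# start bdevperf idle (-z) so it can be configured over its RPC socket
"$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!
until [ -S "$sock" ]; do sleep 0.1; done            # stand-in for waitforlisten

# attach the controller with the reconnect/ctrlr-loss timeouts, then start I/O
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
rpc_pid=$!
sleep 1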
01:08:48.629 [2024-07-22 11:03:56.307206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:48.629 [2024-07-22 11:03:56.355970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:08:49.576 11:03:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:49.576 11:03:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:08:49.576 11:03:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:08:49.576 11:03:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 01:08:49.835 NVMe0n1 01:08:49.835 11:03:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=115497 01:08:49.835 11:03:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:08:49.835 11:03:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 01:08:50.094 Running I/O for 10 seconds... 01:08:51.031 11:03:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:08:51.031 [2024-07-22 11:03:58.901375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4a10 is same with the state(5) to be set 01:08:51.031 [2024-07-22 11:03:58.901420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4a10 is same with the state(5) to be set 01:08:51.031 [2024-07-22 11:03:58.901430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4a10 is same with the state(5) to be set 01:08:51.031 [2024-07-22 11:03:58.901439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4a10 is same with the state(5) to be set 01:08:51.031 [2024-07-22 11:03:58.901447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4a10 is same with the state(5) to be set 01:08:51.031 [2024-07-22 11:03:58.901455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4a10 is same with the state(5) to be set 01:08:51.031 [2024-07-22 11:03:58.901661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 
11:03:58.901774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.901990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.901999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.902018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.902038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.902059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.902078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.902098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.902118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.902138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.902157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.031 [2024-07-22 11:03:58.902177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.031 [2024-07-22 11:03:58.902197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.031 [2024-07-22 11:03:58.902216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.031 [2024-07-22 11:03:58.902236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.031 [2024-07-22 11:03:58.902255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.031 [2024-07-22 11:03:58.902266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.032 [2024-07-22 11:03:58.902367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97816 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:08:51.032 [2024-07-22 11:03:58.902604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:51.032 [2024-07-22 11:03:58.902701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:51.032 [2024-07-22 11:03:58.902790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:51.032 [2024-07-22 11:03:58.902799] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:08:51.032 - 01:08:51.034 [2024-07-22 11:03:58.902810 .. 11:03:58.904190] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued READ commands sqid:1 nsid:1 lba:97976..98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (the same NOTICE pair repeats for every outstanding READ)
01:08:51.034 [2024-07-22 11:03:58.904201 .. 11:03:58.904320] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued WRITE commands sqid:1 nsid:1 lba:98712..98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:08:51.034 [2024-07-22 11:03:58.904330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff0dc0 is same with the state(5) to be set
01:08:51.034 [2024-07-22 11:03:58.904341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
01:08:51.034 [2024-07-22 11:03:58.904349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
01:08:51.034 [2024-07-22 11:03:58.904357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98760 len:8 PRP1 0x0 PRP2 0x0
01:08:51.034 [2024-07-22 11:03:58.904366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:08:51.034 [2024-07-22 11:03:58.904412] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xff0dc0 was disconnected and freed. reset controller.
01:08:51.034 [2024-07-22 11:03:58.904629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:08:51.034 [2024-07-22 11:03:58.904693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff5650 (9): Bad file descriptor
01:08:51.034 [2024-07-22 11:03:58.904772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:08:51.034 [2024-07-22 11:03:58.904786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff5650 with addr=10.0.0.2, port=4420
01:08:51.034 [2024-07-22 11:03:58.904796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff5650 is same with the state(5) to be set
01:08:51.034 [2024-07-22 11:03:58.904811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff5650 (9): Bad file descriptor
01:08:51.034 [2024-07-22 11:03:58.904824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
01:08:51.034 [2024-07-22 11:03:58.904833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
01:08:51.034 [2024-07-22 11:03:58.904843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:08:51.034 [2024-07-22 11:03:58.904860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
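Editor's note: the block above is the host side of the timeout test at work. bdev_nvme tears down qpair 0xff0dc0, aborts everything still queued on it with SQ DELETION, then tries to reconnect to 10.0.0.2:4420 while the target listener is gone, so posix_sock_create() reports errno 111 and the reset attempt fails. A minimal sketch (Python, not part of the SPDK test suite) of what that errno means; the Linux errno value and the loopback port used here are assumptions for the sketch, not taken from this log:

import errno
import socket

# errno 111 in the "connect() failed, errno = 111" lines is ECONNREFUSED on
# Linux: nothing is accepting connections on that address/port while the
# NVMe/TCP listener is removed.
assert errno.ECONNREFUSED == 111  # Linux-specific value; other platforms differ

# Reproduce a refused connect locally (hypothetical closed loopback port,
# not the 10.0.0.2:4420 target from the log).
try:
    conn = socket.create_connection(("127.0.0.1", 4420), timeout=1)
    conn.close()
except OSError as exc:
    print("connect() failed, errno =", exc.errno)  # 111 when the port is closed

Once the reconnect is refused, spdk_nvme_ctrlr_reconnect_poll_async() keeps reporting the controller as failed, which is why the same block of ERROR lines repeats on the retry below until host/timeout.sh re-adds the listener and the reset succeeds.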
01:08:51.034 [2024-07-22 11:03:58.904879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:08:51.034 11:03:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
01:08:52.411 [2024-07-22 11:03:59.903373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:08:52.411 [2024-07-22 11:03:59.903435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff5650 with addr=10.0.0.2, port=4420
01:08:52.411 [2024-07-22 11:03:59.903449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff5650 is same with the state(5) to be set
01:08:52.411 [2024-07-22 11:03:59.903470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff5650 (9): Bad file descriptor
01:08:52.411 [2024-07-22 11:03:59.903485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
01:08:52.411 [2024-07-22 11:03:59.903494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
01:08:52.411 [2024-07-22 11:03:59.903504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:08:52.411 [2024-07-22 11:03:59.903525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
01:08:52.411 [2024-07-22 11:03:59.903534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:08:52.411 11:03:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
01:08:52.411 [2024-07-22 11:04:00.119433] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
01:08:52.411 11:04:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 115497
01:08:53.350 [2024-07-22 11:04:00.913317] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
01:08:59.918
01:08:59.918 Latency(us)
01:08:59.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:08:59.918 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:08:59.918 Verification LBA range: start 0x0 length 0x4000
01:08:59.918 NVMe0n1 : 10.02 8029.88 31.37 0.00 0.00 15911.91 1500.22 3018551.31
01:08:59.918 ===================================================================================================================
01:08:59.918 Total : 8029.88 31.37 0.00 0.00 15911.91 1500.22 3018551.31
01:08:59.918 0
01:08:59.918 11:04:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=115614
01:09:00.176 11:04:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
01:09:00.176 11:04:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:09:00.176 Running I/O for 10 seconds...
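Editor's note: host/timeout.sh drives the target through SPDK's rpc.py wrapper, and the nvmf_subsystem_add_listener call above (like the nvmf_subsystem_remove_listener call below) is an ordinary JSON-RPC request on the application's RPC socket. A rough sketch of the equivalent raw request follows; the method and parameter names are SPDK's documented RPC interface, while the /var/tmp/spdk.sock path is rpc.py's default socket and is only assumed here (the log shows just the command line):

import json
import socket

# Hypothetical standalone equivalent of:
#   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_listener",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {
            "trtype": "TCP",       # -t tcp
            "adrfam": "IPv4",
            "traddr": "10.0.0.2",  # -a 10.0.0.2
            "trsvcid": "4420",     # -s 4420
        },
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")          # assumed default RPC socket path
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(4096).decode())             # e.g. {"jsonrpc": "2.0", "id": 1, "result": true}

Removing the listener with nvmf_subsystem_remove_listener (host/timeout.sh@99 below) takes the same parameters, and it is that removal which triggers the next burst of aborted I/O in the log.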
01:09:01.113 11:04:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
01:09:01.113 - 01:09:01.114 [2024-07-22 11:04:08.992909 .. 11:04:08.993308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3bb40 is same with the state(5) to be set (the same ERROR line repeated back-to-back while the listener is being torn down)
01:09:01.114 [2024-07-22 11:04:08.993728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
01:09:01.114 [2024-07-22 11:04:08.993776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:09:01.114 [2024-07-22 11:04:08.993789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
01:09:01.114 [2024-07-22 11:04:08.993798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:09:01.114 [2024-07-22 11:04:08.993807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
01:09:01.114 [2024-07-22 11:04:08.993816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:09:01.114 [2024-07-22 11:04:08.993825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
01:09:01.114 [2024-07-22 11:04:08.993834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:09:01.114 [2024-07-22 11:04:08.993843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff5650 is same with the state(5) to be set
01:09:01.114 [2024-07-22 11:04:08.993885 .. 11:04:08.994246] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued WRITE commands sqid:1 nsid:1 lba:97328..97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (the same NOTICE pair repeats for every outstanding WRITE)
01:09:01.114 - 01:09:01.116 [2024-07-22 11:04:08.994256 .. 11:04:08.995436] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued READ commands sqid:1 nsid:1 lba:96752..97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:09:01.116 [2024-07-22 11:04:08.995445 .. 11:04:08.995654] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued WRITE commands sqid:1 nsid:1 lba:97488..97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:09:01.116 [2024-07-22 11:04:08.995664] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.116 [2024-07-22 11:04:08.995947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:01.116 [2024-07-22 11:04:08.995965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:01.116 [2024-07-22 11:04:08.995983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.116 [2024-07-22 11:04:08.995994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:01.117 [2024-07-22 11:04:08.996002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:01.117 [2024-07-22 11:04:08.996020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97296 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:09:01.117 [2024-07-22 11:04:08.996038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:01.117 [2024-07-22 11:04:08.996057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:01.117 [2024-07-22 11:04:08.996076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:01.117 [2024-07-22 11:04:08.996097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.117 [2024-07-22 11:04:08.996115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.117 [2024-07-22 11:04:08.996133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.117 [2024-07-22 11:04:08.996152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.117 [2024-07-22 11:04:08.996170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.117 [2024-07-22 11:04:08.996188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.117 [2024-07-22 11:04:08.996205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:01.117 
[2024-07-22 11:04:08.996223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:09:01.117 [2024-07-22 11:04:08.996250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:09:01.117 [2024-07-22 11:04:08.996258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97768 len:8 PRP1 0x0 PRP2 0x0 01:09:01.117 [2024-07-22 11:04:08.996273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:01.117 [2024-07-22 11:04:08.996317] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10233d0 was disconnected and freed. reset controller. 01:09:01.117 [2024-07-22 11:04:08.996501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:09:01.117 [2024-07-22 11:04:08.996519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff5650 (9): Bad file descriptor 01:09:01.117 [2024-07-22 11:04:08.996599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:09:01.117 [2024-07-22 11:04:08.996614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff5650 with addr=10.0.0.2, port=4420 01:09:01.117 [2024-07-22 11:04:08.996623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff5650 is same with the state(5) to be set 01:09:01.117 [2024-07-22 11:04:08.996637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff5650 (9): Bad file descriptor 01:09:01.117 [2024-07-22 11:04:08.996650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:09:01.117 [2024-07-22 11:04:08.996660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:09:01.117 [2024-07-22 11:04:08.996670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:09:01.117 [2024-07-22 11:04:08.996685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
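What precedes this point is the abort dump produced when the controller is reset while the target's listener is down: SPDK disconnects the I/O qpair, manually completes every queued READ/WRITE with "ABORTED - SQ DELETION (00/08)" (status code type 0x0, status code 0x08: command aborted due to SQ deletion), frees the qpair, and starts reconnect attempts to 10.0.0.2:4420, which fail with errno 111 (connection refused). A small, hedged way to quantify that abort storm from a saved copy of this console output (the file name build.log is a placeholder, not part of the test):

  # Total number of completions aborted because their submission queue was deleted
  grep -c 'ABORTED - SQ DELETION' build.log
  # Break the printed sqid:1 commands down into READs vs WRITEs
  grep -oE 'NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c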
01:09:01.117 [2024-07-22 11:04:08.996694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:09:01.117 11:04:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 01:09:02.516 [2024-07-22 11:04:10.011961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:09:02.516 [2024-07-22 11:04:10.012027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff5650 with addr=10.0.0.2, port=4420 01:09:02.516 [2024-07-22 11:04:10.012041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff5650 is same with the state(5) to be set 01:09:02.516 [2024-07-22 11:04:10.012064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff5650 (9): Bad file descriptor 01:09:02.516 [2024-07-22 11:04:10.012080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:09:02.517 [2024-07-22 11:04:10.012089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:09:02.517 [2024-07-22 11:04:10.012099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:09:02.517 [2024-07-22 11:04:10.012122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:09:02.517 [2024-07-22 11:04:10.012131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:09:03.085 [2024-07-22 11:04:11.010640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:09:03.085 [2024-07-22 11:04:11.010702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff5650 with addr=10.0.0.2, port=4420 01:09:03.085 [2024-07-22 11:04:11.010717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff5650 is same with the state(5) to be set 01:09:03.085 [2024-07-22 11:04:11.010739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff5650 (9): Bad file descriptor 01:09:03.085 [2024-07-22 11:04:11.010754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:09:03.085 [2024-07-22 11:04:11.010763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:09:03.085 [2024-07-22 11:04:11.010773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:09:03.085 [2024-07-22 11:04:11.010794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:09:03.085 [2024-07-22 11:04:11.010803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:09:04.461 [2024-07-22 11:04:12.009506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:09:04.461 [2024-07-22 11:04:12.009562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff5650 with addr=10.0.0.2, port=4420 01:09:04.461 [2024-07-22 11:04:12.009575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff5650 is same with the state(5) to be set 01:09:04.461 [2024-07-22 11:04:12.009764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff5650 (9): Bad file descriptor 01:09:04.461 [2024-07-22 11:04:12.009943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:09:04.461 [2024-07-22 11:04:12.009959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:09:04.461 [2024-07-22 11:04:12.009970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:09:04.461 [2024-07-22 11:04:12.012679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:09:04.461 [2024-07-22 11:04:12.012710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:09:04.462 11:04:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:09:04.462 [2024-07-22 11:04:12.234045] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:09:04.462 11:04:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 115614 01:09:05.398 [2024-07-22 11:04:13.041580] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
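This is the recovery half of the timeout scenario: while the TCP listener on 10.0.0.2:4420 is down, every reconnect attempt fails with errno 111 and the controller stays in the failed state; as soon as host/timeout.sh@102 re-adds the listener, the next reset succeeds ("Resetting controller successful"). A minimal sketch of the outage/recovery cycle, assuming only the rpc.py calls and paths that this log prints (the matching remove_listener call runs earlier in the script and is visible again at the start of the next run; the waitforlisten and cleanup steps are omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Drop the target listener: initiator reconnects start failing with errno 111
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  # host/timeout.sh@101: let bdev_nvme make a few reconnect attempts while the port is closed
  sleep 3
  # Restore the listener: the next reconnect/reset attempt succeeds
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420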
01:09:10.693 01:09:10.693 Latency(us) 01:09:10.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:10.693 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:09:10.693 Verification LBA range: start 0x0 length 0x4000 01:09:10.693 NVMe0n1 : 10.00 7031.36 27.47 5250.03 0.00 10406.18 457.30 3018551.31 01:09:10.693 =================================================================================================================== 01:09:10.693 Total : 7031.36 27.47 5250.03 0.00 10406.18 0.00 3018551.31 01:09:10.693 0 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 115449 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115449 ']' 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115449 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115449 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115449' 01:09:10.693 killing process with pid 115449 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115449 01:09:10.693 Received shutdown signal, test time was about 10.000000 seconds 01:09:10.693 01:09:10.693 Latency(us) 01:09:10.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:10.693 =================================================================================================================== 01:09:10.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:09:10.693 11:04:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115449 01:09:10.693 11:04:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 01:09:10.693 11:04:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=115739 01:09:10.693 11:04:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 115739 /var/tmp/bdevperf.sock 01:09:10.693 11:04:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115739 ']' 01:09:10.693 11:04:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:09:10.693 11:04:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:10.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:09:10.693 11:04:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:09:10.693 11:04:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:10.693 11:04:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:09:10.693 [2024-07-22 11:04:18.152700] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
01:09:10.693 [2024-07-22 11:04:18.152764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115739 ] 01:09:10.693 [2024-07-22 11:04:18.270096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:09:10.693 [2024-07-22 11:04:18.292848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:10.693 [2024-07-22 11:04:18.335392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:09:11.260 11:04:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:11.260 11:04:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:09:11.260 11:04:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 115739 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 01:09:11.260 11:04:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=115763 01:09:11.260 11:04:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 01:09:11.518 11:04:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:09:11.777 NVMe0n1 01:09:11.777 11:04:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=115816 01:09:11.777 11:04:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:09:11.777 11:04:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 01:09:11.777 Running I/O for 10 seconds... 
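Setup for the second run, condensed from the commands printed above: bdevperf is started idle (-z) on its own RPC socket, the NVMe-oF controller is attached with reconnect behaviour bounded by --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 (roughly: keep retrying for up to ~5 s, one attempt every ~2 s), and the 10-second randread workload is then triggered over RPC. A hedged sketch using only invocations that appear in this log; the waitforlisten helper, the bpftrace probe (@115) and the bdev_nvme_set_options call (@118) are summarised as comments rather than reproduced:

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock
  # Start bdevperf idle and let it listen on its private RPC socket
  $spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
  # (timeout.sh also attaches a bpftrace probe and calls bdev_nvme_set_options here)
  # Attach the target; reconnects are limited to ~5 s with ~2 s between attempts
  $spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the 10 s randread run against the resulting NVMe0n1 bdev
  $spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests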
01:09:12.714 11:04:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:09:12.977 [2024-07-22 11:04:20.774490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774688] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774857] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the 
state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.774997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.775004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed00 is same with the state(5) to be set 01:09:12.977 [2024-07-22 11:04:20.775369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.977 [2024-07-22 11:04:20.775398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.977 [2024-07-22 11:04:20.775416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.977 [2024-07-22 11:04:20.775425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.977 [2024-07-22 11:04:20.775437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101912 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.775965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:09:12.978 [2024-07-22 11:04:20.775984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.775994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 
11:04:20.776173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.978 [2024-07-22 11:04:20.776219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.978 [2024-07-22 11:04:20.776227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:09:12.979 [2024-07-22 11:04:20.776963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.776981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.776990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.777000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.777009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.777018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.979 [2024-07-22 11:04:20.777027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.979 [2024-07-22 11:04:20.777037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777150] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:56208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777534] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:21008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:12.980 [2024-07-22 11:04:20.777817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:09:12.980 [2024-07-22 11:04:20.777848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:09:12.980 [2024-07-22 11:04:20.777855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89992 len:8 PRP1 0x0 PRP2 0x0 01:09:12.980 [2024-07-22 11:04:20.777864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:12.980 [2024-07-22 11:04:20.777909] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d1bdc0 was disconnected and freed. reset controller. 
01:09:12.981 [2024-07-22 11:04:20.778143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:09:12.981 [2024-07-22 11:04:20.778207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20650 (9): Bad file descriptor 01:09:12.981 [2024-07-22 11:04:20.778303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:09:12.981 [2024-07-22 11:04:20.778326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20650 with addr=10.0.0.2, port=4420 01:09:12.981 [2024-07-22 11:04:20.778339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20650 is same with the state(5) to be set 01:09:12.981 [2024-07-22 11:04:20.778362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20650 (9): Bad file descriptor 01:09:12.981 [2024-07-22 11:04:20.778376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:09:12.981 [2024-07-22 11:04:20.778384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:09:12.981 [2024-07-22 11:04:20.778394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:09:12.981 [2024-07-22 11:04:20.778412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:09:12.981 [2024-07-22 11:04:20.778421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:09:12.981 11:04:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 115816 01:09:14.883 [2024-07-22 11:04:22.775345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:09:14.883 [2024-07-22 11:04:22.775405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20650 with addr=10.0.0.2, port=4420 01:09:14.883 [2024-07-22 11:04:22.775419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20650 is same with the state(5) to be set 01:09:14.883 [2024-07-22 11:04:22.775441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20650 (9): Bad file descriptor 01:09:14.883 [2024-07-22 11:04:22.775467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:09:14.883 [2024-07-22 11:04:22.775476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:09:14.883 [2024-07-22 11:04:22.775487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:09:14.883 [2024-07-22 11:04:22.775508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
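In the reconnect attempt above, errno = 111 from posix_sock_create is ECONNREFUSED: nothing is accepting connections at 10.0.0.2:4420, so the connect fails, the controller stays in the failed state, and bdev_nvme schedules another reset. Later in the log the test counts how many 'reconnect delay' probe hits landed in trace.txt and requires more than two; a hedged bash reconstruction of that check (not the literal timeout.sh code, but the same pass criterion seen at host/timeout.sh@132) looks like:

    # require more than 2 traced reconnect delays, as the test does
    count=$(grep -c 'reconnect delay bdev controller NVMe0' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
    if (( count <= 2 )); then
        echo "expected more than 2 reconnect delays, got $count" >&2
        exit 1
    fi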
01:09:14.883 [2024-07-22 11:04:22.775517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:09:17.415 [2024-07-22 11:04:24.772458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:09:17.415 [2024-07-22 11:04:24.772516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20650 with addr=10.0.0.2, port=4420
01:09:17.415 [2024-07-22 11:04:24.772530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20650 is same with the state(5) to be set
01:09:17.415 [2024-07-22 11:04:24.772552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20650 (9): Bad file descriptor
01:09:17.415 [2024-07-22 11:04:24.772568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
01:09:17.415 [2024-07-22 11:04:24.772578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
01:09:17.415 [2024-07-22 11:04:24.772588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:09:17.415 [2024-07-22 11:04:24.772611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
01:09:17.415 [2024-07-22 11:04:24.772620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:09:19.319 [2024-07-22 11:04:26.769450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:09:19.319 [2024-07-22 11:04:26.769492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
01:09:19.319 [2024-07-22 11:04:26.769503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
01:09:19.319 [2024-07-22 11:04:26.769512] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
01:09:19.319 [2024-07-22 11:04:26.769534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
01:09:19.885
01:09:19.885 Latency(us)
01:09:19.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:09:19.886 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
01:09:19.886 NVMe0n1 : 8.16 3357.70 13.12 15.69 0.00 37902.26 1855.54 7061253.96
01:09:19.886 ===================================================================================================================
01:09:19.886 Total : 3357.70 13.12 15.69 0.00 37902.26 1855.54 7061253.96
01:09:19.886 0
01:09:19.886 11:04:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
01:09:19.886 Attaching 5 probes...
01:09:19.886 1142.016699: reset bdev controller NVMe0 01:09:19.886 1142.129461: reconnect bdev controller NVMe0 01:09:19.886 3139.107251: reconnect delay bdev controller NVMe0 01:09:19.886 3139.129337: reconnect bdev controller NVMe0 01:09:19.886 5136.220689: reconnect delay bdev controller NVMe0 01:09:19.886 5136.244370: reconnect bdev controller NVMe0 01:09:19.886 7133.318121: reconnect delay bdev controller NVMe0 01:09:19.886 7133.334258: reconnect bdev controller NVMe0 01:09:19.886 11:04:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 01:09:19.886 11:04:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 01:09:19.886 11:04:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 115763 01:09:19.886 11:04:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:09:19.886 11:04:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 115739 01:09:19.886 11:04:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115739 ']' 01:09:19.886 11:04:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115739 01:09:19.886 11:04:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:09:20.145 11:04:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:09:20.145 11:04:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115739 01:09:20.145 killing process with pid 115739 01:09:20.145 Received shutdown signal, test time was about 8.243067 seconds 01:09:20.145 01:09:20.145 Latency(us) 01:09:20.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:20.145 =================================================================================================================== 01:09:20.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:09:20.145 11:04:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:09:20.145 11:04:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:09:20.145 11:04:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115739' 01:09:20.145 11:04:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115739 01:09:20.145 11:04:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115739 01:09:20.145 11:04:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:09:20.404 rmmod nvme_tcp 01:09:20.404 rmmod nvme_fabrics 01:09:20.404 rmmod nvme_keyring 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- 
nvmf/common.sh@124 -- # set -e 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 115164 ']' 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 115164 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115164 ']' 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115164 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:09:20.404 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115164 01:09:20.663 killing process with pid 115164 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115164' 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115164 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115164 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:20.663 11:04:28 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:09:20.922 01:09:20.922 real 0m46.054s 01:09:20.922 user 2m14.033s 01:09:20.922 sys 0m5.869s 01:09:20.922 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 01:09:20.922 ************************************ 01:09:20.922 END TEST nvmf_timeout 01:09:20.922 ************************************ 01:09:20.922 11:04:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:09:20.922 11:04:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:09:20.922 11:04:28 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 01:09:20.922 11:04:28 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 01:09:20.922 11:04:28 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:20.922 11:04:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:20.922 11:04:28 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 01:09:20.922 01:09:20.922 real 20m49.146s 01:09:20.922 user 61m7.205s 01:09:20.922 sys 5m17.323s 01:09:20.922 11:04:28 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 01:09:20.922 11:04:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:20.922 ************************************ 01:09:20.922 END TEST nvmf_tcp 01:09:20.922 ************************************ 01:09:20.922 11:04:28 -- 
common/autotest_common.sh@1142 -- # return 0 01:09:20.922 11:04:28 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 01:09:20.922 11:04:28 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:09:20.922 11:04:28 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:09:20.922 11:04:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:09:20.922 11:04:28 -- common/autotest_common.sh@10 -- # set +x 01:09:20.922 ************************************ 01:09:20.922 START TEST spdkcli_nvmf_tcp 01:09:20.922 ************************************ 01:09:20.922 11:04:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:09:21.181 * Looking for test storage... 01:09:21.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:21.181 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=116029 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 116029 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 116029 ']' 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:21.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:21.182 11:04:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:21.182 [2024-07-22 11:04:28.970812] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:09:21.182 [2024-07-22 11:04:28.971428] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116029 ] 01:09:21.182 [2024-07-22 11:04:29.090584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:09:21.182 [2024-07-22 11:04:29.104820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:09:21.441 [2024-07-22 11:04:29.148698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:09:21.441 [2024-07-22 11:04:29.148706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:22.009 11:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 01:09:22.009 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 01:09:22.009 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 01:09:22.009 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 01:09:22.009 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 01:09:22.009 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 01:09:22.009 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 01:09:22.009 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 01:09:22.009 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:09:22.009 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:09:22.009 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 01:09:22.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 01:09:22.009 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 01:09:22.009 ' 01:09:25.311 [2024-07-22 11:04:32.551662] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:26.247 [2024-07-22 11:04:33.850837] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 01:09:28.783 [2024-07-22 11:04:36.284920] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 01:09:30.687 [2024-07-22 11:04:38.399012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 01:09:32.062 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 01:09:32.063 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 01:09:32.063 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 01:09:32.063 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 01:09:32.063 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 01:09:32.063 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 01:09:32.063 Executing command: 
['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 01:09:32.063 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:09:32.063 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:09:32.063 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 01:09:32.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 01:09:32.063 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 01:09:32.321 11:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 01:09:32.321 11:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:32.321 11:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:32.321 11:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 01:09:32.321 11:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 
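Each 'Executing command' line above is spdkcli_job.py replaying one spdkcli path: malloc bdevs, the TCP transport, then subsystems with their namespaces, listeners and allowed hosts. The same objects can be created with plain rpc.py calls; the following is only a hedged sketch for the first subsystem, and the flag spellings are from memory and should be checked against scripts/rpc.py --help:

    # rough rpc.py equivalents of a few of the spdkcli create commands above (sketch, not the test itself)
    scripts/rpc.py bdev_malloc_create -b Malloc3 32 512                      # 32 MiB bdev with 512-byte blocks
    scripts/rpc.py nvmf_create_transport -t tcp                              # the spdkcli line also sets io_unit_size and max_io_qpairs_per_ctrlr
    scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260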
01:09:32.321 11:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:32.321 11:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 01:09:32.321 11:04:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 01:09:32.886 11:04:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 01:09:32.886 11:04:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 01:09:32.886 11:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 01:09:32.886 11:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:32.886 11:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:32.886 11:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 01:09:32.886 11:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:09:32.886 11:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:32.886 11:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 01:09:32.886 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 01:09:32.886 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:09:32.886 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 01:09:32.886 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 01:09:32.886 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 01:09:32.886 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 01:09:32.886 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:09:32.886 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 01:09:32.886 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 01:09:32.886 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 01:09:32.886 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 01:09:32.886 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 01:09:32.886 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 01:09:32.886 ' 01:09:39.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 01:09:39.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 01:09:39.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 01:09:39.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 01:09:39.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 01:09:39.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 01:09:39.459 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 
01:09:39.459 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 01:09:39.459 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 01:09:39.459 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 01:09:39.459 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 01:09:39.459 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 01:09:39.459 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 01:09:39.459 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 116029 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 116029 ']' 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 116029 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116029 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:09:39.459 killing process with pid 116029 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116029' 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 116029 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 116029 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 116029 ']' 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 116029 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 116029 ']' 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 116029 01:09:39.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (116029) - No such process 01:09:39.459 Process with pid 116029 is not found 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 116029 is not found' 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 01:09:39.459 01:09:39.459 real 0m17.737s 01:09:39.459 user 0m38.772s 01:09:39.459 sys 0m1.098s 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 01:09:39.459 11:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:39.459 ************************************ 01:09:39.459 END TEST spdkcli_nvmf_tcp 01:09:39.459 
************************************ 01:09:39.459 11:04:46 -- common/autotest_common.sh@1142 -- # return 0 01:09:39.459 11:04:46 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:09:39.459 11:04:46 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:09:39.459 11:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:09:39.459 11:04:46 -- common/autotest_common.sh@10 -- # set +x 01:09:39.459 ************************************ 01:09:39.459 START TEST nvmf_identify_passthru 01:09:39.459 ************************************ 01:09:39.459 11:04:46 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:09:39.459 * Looking for test storage... 01:09:39.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:09:39.459 11:04:46 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:39.459 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:39.460 11:04:46 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:39.460 11:04:46 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:39.460 11:04:46 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:39.460 11:04:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:39.460 11:04:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:39.460 11:04:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:39.460 11:04:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:09:39.460 11:04:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 01:09:39.460 11:04:46 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:39.460 11:04:46 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:39.460 11:04:46 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:39.460 11:04:46 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:39.460 11:04:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:39.460 11:04:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:39.460 11:04:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:39.460 11:04:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:09:39.460 11:04:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:39.460 11:04:46 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:39.460 11:04:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:09:39.460 11:04:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@432 
-- # nvmf_veth_init 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:09:39.460 Cannot find device "nvmf_tgt_br" 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:09:39.460 Cannot find device "nvmf_tgt_br2" 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:09:39.460 Cannot find device "nvmf_tgt_br" 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:09:39.460 Cannot find device "nvmf_tgt_br2" 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:39.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:39.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:39.460 11:04:46 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:09:39.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:09:39.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 01:09:39.460 01:09:39.460 --- 10.0.0.2 ping statistics --- 01:09:39.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:39.460 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:09:39.460 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:09:39.460 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 01:09:39.460 01:09:39.460 --- 10.0.0.3 ping statistics --- 01:09:39.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:39.460 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:39.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:09:39.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 01:09:39.460 01:09:39.460 --- 10.0.0.1 ping statistics --- 01:09:39.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:39.460 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:09:39.460 11:04:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:09:39.460 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:39.460 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:09:39.460 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 01:09:39.460 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 01:09:39.460 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 01:09:39.460 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:09:39.460 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 01:09:39.460 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 01:09:39.718 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
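Note: the serial number 12340 above is read straight from the first local PCIe controller; get_first_nvme_bdf takes the first traddr emitted by gen_nvme.sh, and spdk_nvme_identify output is filtered with grep/awk. Condensed into a sketch (the BDF 0000:00:10.0 is specific to this QEMU run, and `head -n1` here stands in for the harness's first-element selection):
  bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
    | grep 'Serial Number:' | awk '{print $3}'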
01:09:39.718 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 01:09:39.718 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:09:39.718 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 01:09:39.977 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 01:09:39.977 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 01:09:39.977 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:39.977 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:39.977 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 01:09:39.977 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 01:09:39.977 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:39.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:39.977 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=116526 01:09:39.977 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:09:39.977 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:09:39.977 11:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 116526 01:09:39.977 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 116526 ']' 01:09:39.977 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:39.977 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:39.977 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:39.977 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:39.977 11:04:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:39.977 [2024-07-22 11:04:47.786917] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:09:39.977 [2024-07-22 11:04:47.786982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:40.236 [2024-07-22 11:04:47.908420] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:09:40.236 [2024-07-22 11:04:47.933481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:09:40.236 [2024-07-22 11:04:47.980525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:40.236 [2024-07-22 11:04:47.980816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
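Note: the target is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc so that the passthru identify handler can be enabled before the nvmf subsystem initializes; the trace that follows then starts the framework and creates the TCP transport. Roughly, assuming the default /var/tmp/spdk.sock RPC socket:
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # only possible before framework_start_init
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192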
01:09:40.236 [2024-07-22 11:04:47.981015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:40.236 [2024-07-22 11:04:47.981068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:40.236 [2024-07-22 11:04:47.981148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:40.236 [2024-07-22 11:04:47.981380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:09:40.236 [2024-07-22 11:04:47.981567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:09:40.236 [2024-07-22 11:04:47.981719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:09:40.236 [2024-07-22 11:04:47.981722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:09:40.801 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:40.801 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 01:09:40.801 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 01:09:40.801 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:40.801 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:40.801 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:40.801 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 01:09:40.801 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:40.801 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:40.801 [2024-07-22 11:04:48.730351] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:41.060 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:41.060 [2024-07-22 11:04:48.743726] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:41.060 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:41.060 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:41.060 Nvme0n1 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:41.060 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:41.060 11:04:48 nvmf_identify_passthru 
-- common/autotest_common.sh@10 -- # set +x 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:41.060 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:41.060 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:41.060 [2024-07-22 11:04:48.910184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:41.060 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:41.060 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:41.060 [ 01:09:41.060 { 01:09:41.060 "allow_any_host": true, 01:09:41.060 "hosts": [], 01:09:41.060 "listen_addresses": [], 01:09:41.061 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:09:41.061 "subtype": "Discovery" 01:09:41.061 }, 01:09:41.061 { 01:09:41.061 "allow_any_host": true, 01:09:41.061 "hosts": [], 01:09:41.061 "listen_addresses": [ 01:09:41.061 { 01:09:41.061 "adrfam": "IPv4", 01:09:41.061 "traddr": "10.0.0.2", 01:09:41.061 "trsvcid": "4420", 01:09:41.061 "trtype": "TCP" 01:09:41.061 } 01:09:41.061 ], 01:09:41.061 "max_cntlid": 65519, 01:09:41.061 "max_namespaces": 1, 01:09:41.061 "min_cntlid": 1, 01:09:41.061 "model_number": "SPDK bdev Controller", 01:09:41.061 "namespaces": [ 01:09:41.061 { 01:09:41.061 "bdev_name": "Nvme0n1", 01:09:41.061 "name": "Nvme0n1", 01:09:41.061 "nguid": "9A2C20BA428847368E9A17C57DDD6A2F", 01:09:41.061 "nsid": 1, 01:09:41.061 "uuid": "9a2c20ba-4288-4736-8e9a-17c57ddd6a2f" 01:09:41.061 } 01:09:41.061 ], 01:09:41.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:09:41.061 "serial_number": "SPDK00000000000001", 01:09:41.061 "subtype": "NVMe" 01:09:41.061 } 01:09:41.061 ] 01:09:41.061 11:04:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:41.061 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:09:41.061 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 01:09:41.061 11:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 01:09:41.320 11:04:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 01:09:41.320 11:04:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:09:41.320 11:04:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- 
# grep 'Model Number:' 01:09:41.320 11:04:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 01:09:41.578 11:04:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 01:09:41.578 11:04:49 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 01:09:41.578 11:04:49 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 01:09:41.578 11:04:49 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:09:41.578 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:41.578 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:41.578 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:41.578 11:04:49 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 01:09:41.578 11:04:49 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 01:09:41.578 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 01:09:41.578 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 01:09:41.578 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:09:41.578 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 01:09:41.578 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 01:09:41.578 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:09:41.578 rmmod nvme_tcp 01:09:41.578 rmmod nvme_fabrics 01:09:41.835 rmmod nvme_keyring 01:09:41.835 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:09:41.835 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 01:09:41.835 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 01:09:41.835 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 116526 ']' 01:09:41.835 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 116526 01:09:41.835 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 116526 ']' 01:09:41.835 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 116526 01:09:41.835 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 01:09:41.835 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:09:41.835 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116526 01:09:41.835 killing process with pid 116526 01:09:41.835 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:09:41.835 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:09:41.835 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116526' 01:09:41.835 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 116526 01:09:41.835 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 116526 01:09:41.835 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:09:41.835 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:09:42.092 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:09:42.092 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:09:42.092 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 01:09:42.092 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:42.092 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:09:42.092 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:42.092 11:04:49 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:09:42.092 ************************************ 01:09:42.092 END TEST nvmf_identify_passthru 01:09:42.092 ************************************ 01:09:42.092 01:09:42.092 real 0m3.230s 01:09:42.092 user 0m7.520s 01:09:42.092 sys 0m1.020s 01:09:42.092 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 01:09:42.092 11:04:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:09:42.092 11:04:49 -- common/autotest_common.sh@1142 -- # return 0 01:09:42.092 11:04:49 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:09:42.092 11:04:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:09:42.092 11:04:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:09:42.092 11:04:49 -- common/autotest_common.sh@10 -- # set +x 01:09:42.092 ************************************ 01:09:42.092 START TEST nvmf_dif 01:09:42.092 ************************************ 01:09:42.092 11:04:49 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:09:42.092 * Looking for test storage... 01:09:42.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:09:42.092 11:04:49 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:42.092 11:04:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 01:09:42.092 11:04:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:42.092 11:04:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:42.092 11:04:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:42.092 11:04:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:42.092 11:04:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:42.093 11:04:49 nvmf_dif -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 01:09:42.093 11:04:49 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:42.093 11:04:49 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:42.093 11:04:49 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:42.093 11:04:49 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:42.093 11:04:49 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:42.093 11:04:49 nvmf_dif -- paths/export.sh@5 -- # export PATH 01:09:42.093 11:04:49 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@47 -- # : 0 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 01:09:42.093 11:04:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 01:09:42.093 11:04:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 01:09:42.093 11:04:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 01:09:42.093 11:04:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 01:09:42.093 11:04:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@412 -- # 
remove_spdk_ns 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:42.093 11:04:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:09:42.093 11:04:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:09:42.093 11:04:49 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:09:42.093 Cannot find device "nvmf_tgt_br" 01:09:42.093 11:04:50 nvmf_dif -- nvmf/common.sh@155 -- # true 01:09:42.093 11:04:50 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:09:42.093 Cannot find device "nvmf_tgt_br2" 01:09:42.093 11:04:50 nvmf_dif -- nvmf/common.sh@156 -- # true 01:09:42.093 11:04:50 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:09:42.093 11:04:50 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:09:42.093 Cannot find device "nvmf_tgt_br" 01:09:42.093 11:04:50 nvmf_dif -- nvmf/common.sh@158 -- # true 01:09:42.093 11:04:50 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:09:42.350 Cannot find device "nvmf_tgt_br2" 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@159 -- # true 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:42.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@162 -- # true 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:42.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@163 
-- # true 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:09:42.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:09:42.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 01:09:42.350 01:09:42.350 --- 10.0.0.2 ping statistics --- 01:09:42.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:42.350 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:09:42.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:09:42.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 01:09:42.350 01:09:42.350 --- 10.0.0.3 ping statistics --- 01:09:42.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:42.350 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:42.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:09:42.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 01:09:42.350 01:09:42.350 --- 10.0.0.1 ping statistics --- 01:09:42.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:42.350 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@433 -- # return 0 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 01:09:42.350 11:04:50 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:09:42.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:09:42.915 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:09:42.915 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:09:42.915 11:04:50 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:42.915 11:04:50 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:09:42.915 11:04:50 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:09:42.915 11:04:50 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:42.915 11:04:50 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:09:42.915 11:04:50 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:09:42.915 11:04:50 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 01:09:42.915 11:04:50 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 01:09:42.915 11:04:50 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:09:42.915 11:04:50 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 01:09:42.915 11:04:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:09:42.915 11:04:50 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:09:42.915 11:04:50 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=116871 01:09:42.915 11:04:50 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 116871 01:09:42.915 11:04:50 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 116871 ']' 01:09:42.915 11:04:50 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:42.915 11:04:50 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:42.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:42.915 11:04:50 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:42.915 11:04:50 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:42.915 11:04:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:09:42.915 [2024-07-22 11:04:50.819447] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:09:42.915 [2024-07-22 11:04:50.819520] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:43.173 [2024-07-22 11:04:50.938058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
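Note: nvmf_dif exercises DIF insert/strip on the TCP transport. The configuration built in the trace that follows boils down to a transport created with --dif-insert-or-strip plus null bdevs carrying 16-byte metadata with DIF type 1; as a sketch, using the same RPCs the test issues (relative scripts/rpc.py path assumed):
  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420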
01:09:43.173 [2024-07-22 11:04:50.963493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:43.173 [2024-07-22 11:04:51.004655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:43.173 [2024-07-22 11:04:51.004704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:43.173 [2024-07-22 11:04:51.004713] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:43.173 [2024-07-22 11:04:51.004721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:43.173 [2024-07-22 11:04:51.004743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:43.173 [2024-07-22 11:04:51.004768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:09:43.738 11:04:51 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:43.738 11:04:51 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 01:09:43.738 11:04:51 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:09:43.738 11:04:51 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:43.738 11:04:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:09:43.996 11:04:51 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:43.996 11:04:51 nvmf_dif -- target/dif.sh@139 -- # create_transport 01:09:43.996 11:04:51 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 01:09:43.996 11:04:51 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:43.996 11:04:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:09:43.996 [2024-07-22 11:04:51.733567] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:43.996 11:04:51 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:43.996 11:04:51 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 01:09:43.996 11:04:51 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:09:43.996 11:04:51 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:09:43.996 11:04:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:09:43.996 ************************************ 01:09:43.996 START TEST fio_dif_1_default 01:09:43.996 ************************************ 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:09:43.996 bdev_null0 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:09:43.996 [2024-07-22 11:04:51.797590] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:09:43.996 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:09:43.997 { 01:09:43.997 "params": { 01:09:43.997 "name": "Nvme$subsystem", 01:09:43.997 "trtype": "$TEST_TRANSPORT", 01:09:43.997 "traddr": "$NVMF_FIRST_TARGET_IP", 01:09:43.997 "adrfam": "ipv4", 01:09:43.997 "trsvcid": "$NVMF_PORT", 01:09:43.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:09:43.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:09:43.997 "hdgst": ${hdgst:-false}, 01:09:43.997 "ddgst": ${ddgst:-false} 01:09:43.997 }, 01:09:43.997 "method": "bdev_nvme_attach_controller" 01:09:43.997 } 01:09:43.997 EOF 01:09:43.997 )") 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:09:43.997 "params": { 01:09:43.997 "name": "Nvme0", 01:09:43.997 "trtype": "tcp", 01:09:43.997 "traddr": "10.0.0.2", 01:09:43.997 "adrfam": "ipv4", 01:09:43.997 "trsvcid": "4420", 01:09:43.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:09:43.997 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:09:43.997 "hdgst": false, 01:09:43.997 "ddgst": false 01:09:43.997 }, 01:09:43.997 "method": "bdev_nvme_attach_controller" 01:09:43.997 }' 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:09:43.997 11:04:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:09:44.253 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:09:44.253 fio-3.35 01:09:44.253 Starting 1 thread 01:09:56.454 01:09:56.454 filename0: (groupid=0, jobs=1): err= 0: pid=116956: Mon Jul 22 11:05:02 2024 01:09:56.454 read: IOPS=878, BW=3516KiB/s (3600kB/s)(34.4MiB/10012msec) 01:09:56.454 slat (nsec): min=5345, max=28508, avg=6054.17, stdev=1208.76 01:09:56.454 clat (usec): min=304, max=41977, avg=4534.08, stdev=12313.99 01:09:56.454 lat (usec): min=310, max=41985, 
avg=4540.13, stdev=12313.98 01:09:56.454 clat percentiles (usec): 01:09:56.454 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 330], 20.00th=[ 338], 01:09:56.454 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 01:09:56.454 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[40633], 95.00th=[40633], 01:09:56.454 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 01:09:56.454 | 99.99th=[42206] 01:09:56.454 bw ( KiB/s): min= 2240, max= 4992, per=100.00%, avg=3518.40, stdev=787.50, samples=20 01:09:56.454 iops : min= 560, max= 1248, avg=879.60, stdev=196.88, samples=20 01:09:56.454 lat (usec) : 500=89.36%, 750=0.23% 01:09:56.454 lat (msec) : 4=0.03%, 10=0.01%, 50=10.36% 01:09:56.454 cpu : usr=84.57%, sys=14.92%, ctx=23, majf=0, minf=0 01:09:56.454 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:09:56.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:56.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:56.454 issued rwts: total=8800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:09:56.454 latency : target=0, window=0, percentile=100.00%, depth=4 01:09:56.454 01:09:56.454 Run status group 0 (all jobs): 01:09:56.454 READ: bw=3516KiB/s (3600kB/s), 3516KiB/s-3516KiB/s (3600kB/s-3600kB/s), io=34.4MiB (36.0MB), run=10012-10012msec 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 ************************************ 01:09:56.454 END TEST fio_dif_1_default 01:09:56.454 ************************************ 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.454 01:09:56.454 real 0m10.951s 01:09:56.454 user 0m9.048s 01:09:56.454 sys 0m1.797s 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 11:05:02 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:09:56.454 11:05:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 01:09:56.454 11:05:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:09:56.454 11:05:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 
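For reference, the fio_dif_1_default run that just finished can be reproduced outside the harness roughly as sketched below. The bdev_nvme_attach_controller parameters are the ones printed by gen_nvmf_target_json above; the surrounding "subsystems"/"config" envelope and the job options are assumptions (the harness streams both the JSON config and the job file over /dev/fd, so nothing is written to disk), and Nvme0n1 is the bdev name SPDK derives from the controller name plus namespace 1.

  # /tmp/bdev.json, assumed envelope around the params shown in the trace:
  # {
  #   "subsystems": [ {
  #     "subsystem": "bdev",
  #     "config": [ {
  #       "method": "bdev_nvme_attach_controller",
  #       "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
  #                   "adrfam": "ipv4", "trsvcid": "4420",
  #                   "subnqn": "nqn.2016-06.io.spdk:cnode0",
  #                   "hostnqn": "nqn.2016-06.io.spdk:host0",
  #                   "hdgst": false, "ddgst": false } } ] } ]
  # }
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --name=filename0 --thread=1 --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/bdev.json --filename=Nvme0n1 \
    --rw=randread --bs=4k --iodepth=4 --time_based=1 --runtime=10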
************************************ 01:09:56.454 START TEST fio_dif_1_multi_subsystems 01:09:56.454 ************************************ 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 bdev_null0 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 [2024-07-22 11:05:02.814022] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.454 11:05:02 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 bdev_null1 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:09:56.454 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:09:56.455 { 01:09:56.455 "params": { 01:09:56.455 "name": "Nvme$subsystem", 01:09:56.455 "trtype": "$TEST_TRANSPORT", 01:09:56.455 "traddr": "$NVMF_FIRST_TARGET_IP", 01:09:56.455 "adrfam": "ipv4", 01:09:56.455 "trsvcid": "$NVMF_PORT", 01:09:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:09:56.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:09:56.455 "hdgst": ${hdgst:-false}, 01:09:56.455 "ddgst": ${ddgst:-false} 01:09:56.455 }, 01:09:56.455 "method": "bdev_nvme_attach_controller" 01:09:56.455 } 01:09:56.455 EOF 01:09:56.455 )") 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:09:56.455 { 01:09:56.455 "params": { 01:09:56.455 "name": "Nvme$subsystem", 01:09:56.455 "trtype": "$TEST_TRANSPORT", 01:09:56.455 "traddr": "$NVMF_FIRST_TARGET_IP", 01:09:56.455 "adrfam": "ipv4", 01:09:56.455 "trsvcid": "$NVMF_PORT", 01:09:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:09:56.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:09:56.455 "hdgst": ${hdgst:-false}, 01:09:56.455 "ddgst": ${ddgst:-false} 01:09:56.455 }, 01:09:56.455 "method": "bdev_nvme_attach_controller" 01:09:56.455 } 01:09:56.455 EOF 01:09:56.455 )") 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
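Each create_subsystem call in this test expands into the same four target-side RPCs; expressed with scripts/rpc.py against the default /var/tmp/spdk.sock (the rpc_cmd wrapper used above does the same thing), the second subsystem would look roughly like:

  # 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      --serial-number 53313233-1 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  # the TCP transport was created once, earlier in the run, with DIF
  # insert/strip enabled on the target side:
  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip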
01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:09:56.455 "params": { 01:09:56.455 "name": "Nvme0", 01:09:56.455 "trtype": "tcp", 01:09:56.455 "traddr": "10.0.0.2", 01:09:56.455 "adrfam": "ipv4", 01:09:56.455 "trsvcid": "4420", 01:09:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:09:56.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:09:56.455 "hdgst": false, 01:09:56.455 "ddgst": false 01:09:56.455 }, 01:09:56.455 "method": "bdev_nvme_attach_controller" 01:09:56.455 },{ 01:09:56.455 "params": { 01:09:56.455 "name": "Nvme1", 01:09:56.455 "trtype": "tcp", 01:09:56.455 "traddr": "10.0.0.2", 01:09:56.455 "adrfam": "ipv4", 01:09:56.455 "trsvcid": "4420", 01:09:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:09:56.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:09:56.455 "hdgst": false, 01:09:56.455 "ddgst": false 01:09:56.455 }, 01:09:56.455 "method": "bdev_nvme_attach_controller" 01:09:56.455 }' 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:09:56.455 11:05:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:09:56.455 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:09:56.455 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:09:56.455 fio-3.35 01:09:56.455 Starting 2 threads 01:10:06.427 01:10:06.427 filename0: (groupid=0, jobs=1): err= 0: pid=117115: Mon Jul 22 11:05:13 2024 01:10:06.427 read: IOPS=210, BW=841KiB/s (861kB/s)(8432KiB/10029msec) 01:10:06.427 slat (nsec): min=5831, max=63087, avg=8924.62, stdev=6184.22 01:10:06.427 clat (usec): min=324, max=42508, avg=19002.05, stdev=20176.06 01:10:06.427 lat (usec): min=330, max=42516, avg=19010.98, stdev=20175.52 01:10:06.427 clat percentiles (usec): 01:10:06.427 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 367], 01:10:06.427 | 30.00th=[ 408], 40.00th=[ 433], 50.00th=[ 586], 60.00th=[40633], 01:10:06.427 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 01:10:06.427 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 01:10:06.427 | 99.99th=[42730] 01:10:06.427 bw ( KiB/s): min= 576, max= 2304, per=50.20%, avg=841.60, stdev=369.42, samples=20 01:10:06.427 iops : 
min= 144, max= 576, avg=210.40, stdev=92.36, samples=20 01:10:06.427 lat (usec) : 500=49.19%, 750=4.08%, 1000=0.62% 01:10:06.427 lat (msec) : 2=0.19%, 50=45.92% 01:10:06.427 cpu : usr=93.81%, sys=5.74%, ctx=80, majf=0, minf=9 01:10:06.427 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:06.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:06.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:06.427 issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:06.427 latency : target=0, window=0, percentile=100.00%, depth=4 01:10:06.427 filename1: (groupid=0, jobs=1): err= 0: pid=117116: Mon Jul 22 11:05:13 2024 01:10:06.427 read: IOPS=208, BW=835KiB/s (855kB/s)(8368KiB/10026msec) 01:10:06.427 slat (nsec): min=5801, max=83410, avg=8466.28, stdev=5534.91 01:10:06.427 clat (usec): min=325, max=41668, avg=19142.35, stdev=20189.39 01:10:06.427 lat (usec): min=331, max=41700, avg=19150.82, stdev=20189.03 01:10:06.427 clat percentiles (usec): 01:10:06.427 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 359], 01:10:06.427 | 30.00th=[ 388], 40.00th=[ 416], 50.00th=[ 627], 60.00th=[40633], 01:10:06.427 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 01:10:06.427 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 01:10:06.427 | 99.99th=[41681] 01:10:06.427 bw ( KiB/s): min= 576, max= 1280, per=49.85%, avg=835.25, stdev=183.70, samples=20 01:10:06.427 iops : min= 144, max= 320, avg=208.80, stdev=45.91, samples=20 01:10:06.427 lat (usec) : 500=47.51%, 750=5.64%, 1000=0.19% 01:10:06.427 lat (msec) : 2=0.38%, 50=46.27% 01:10:06.427 cpu : usr=92.99%, sys=6.53%, ctx=74, majf=0, minf=0 01:10:06.427 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:06.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:06.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:06.427 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:06.427 latency : target=0, window=0, percentile=100.00%, depth=4 01:10:06.427 01:10:06.427 Run status group 0 (all jobs): 01:10:06.427 READ: bw=1675KiB/s (1715kB/s), 835KiB/s-841KiB/s (855kB/s-861kB/s), io=16.4MiB (17.2MB), run=10026-10029msec 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:06.427 11:05:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:06.427 ************************************ 01:10:06.427 END TEST fio_dif_1_multi_subsystems 01:10:06.427 ************************************ 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:06.427 01:10:06.427 real 0m11.286s 01:10:06.427 user 0m19.625s 01:10:06.427 sys 0m1.570s 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 01:10:06.427 11:05:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:06.427 11:05:14 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:10:06.427 11:05:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 01:10:06.427 11:05:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:10:06.427 11:05:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:10:06.427 11:05:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:10:06.427 ************************************ 01:10:06.427 START TEST fio_dif_rand_params 01:10:06.427 ************************************ 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:10:06.427 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:06.428 bdev_null0 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:06.428 [2024-07-22 11:05:14.194076] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:10:06.428 { 01:10:06.428 "params": { 01:10:06.428 "name": "Nvme$subsystem", 01:10:06.428 "trtype": "$TEST_TRANSPORT", 01:10:06.428 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:06.428 "adrfam": "ipv4", 01:10:06.428 "trsvcid": "$NVMF_PORT", 01:10:06.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:06.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:06.428 "hdgst": ${hdgst:-false}, 01:10:06.428 "ddgst": ${ddgst:-false} 01:10:06.428 }, 01:10:06.428 "method": "bdev_nvme_attach_controller" 01:10:06.428 } 01:10:06.428 EOF 
01:10:06.428 )") 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
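The job file for this subtest travels over /dev/fd/61 and is never echoed, so the following is a hypothetical reconstruction based on the parameters selected above (bs=128k, numjobs=3, iodepth=3, runtime=5); only the randread workload, 128 KiB block size, iodepth of 3, and the 3 started threads are confirmed by the fio banner below, the rest is assumed:

  ; hypothetical filename0 job approximating what gen_fio_conf emits here
  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=5
  [filename0]
  filename=Nvme0n1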
01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:10:06.428 "params": { 01:10:06.428 "name": "Nvme0", 01:10:06.428 "trtype": "tcp", 01:10:06.428 "traddr": "10.0.0.2", 01:10:06.428 "adrfam": "ipv4", 01:10:06.428 "trsvcid": "4420", 01:10:06.428 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:06.428 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:10:06.428 "hdgst": false, 01:10:06.428 "ddgst": false 01:10:06.428 }, 01:10:06.428 "method": "bdev_nvme_attach_controller" 01:10:06.428 }' 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:10:06.428 11:05:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:06.686 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:10:06.686 ... 
01:10:06.686 fio-3.35 01:10:06.686 Starting 3 threads 01:10:13.251 01:10:13.251 filename0: (groupid=0, jobs=1): err= 0: pid=117276: Mon Jul 22 11:05:19 2024 01:10:13.251 read: IOPS=245, BW=30.6MiB/s (32.1MB/s)(153MiB/5004msec) 01:10:13.251 slat (nsec): min=5921, max=52441, avg=15700.83, stdev=8032.97 01:10:13.251 clat (usec): min=3458, max=50505, avg=12218.27, stdev=13030.24 01:10:13.251 lat (usec): min=3464, max=50513, avg=12233.97, stdev=13030.29 01:10:13.251 clat percentiles (usec): 01:10:13.251 | 1.00th=[ 3523], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6390], 01:10:13.251 | 30.00th=[ 6652], 40.00th=[ 7635], 50.00th=[ 8225], 60.00th=[ 8586], 01:10:13.251 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[46924], 95.00th=[49021], 01:10:13.251 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 01:10:13.251 | 99.99th=[50594] 01:10:13.251 bw ( KiB/s): min=20736, max=45568, per=27.28%, avg=31308.80, stdev=8276.94, samples=10 01:10:13.251 iops : min= 162, max= 356, avg=244.60, stdev=64.66, samples=10 01:10:13.251 lat (msec) : 4=3.18%, 10=84.67%, 20=0.90%, 50=10.28%, 100=0.98% 01:10:13.251 cpu : usr=92.96%, sys=5.84%, ctx=11, majf=0, minf=0 01:10:13.251 IO depths : 1=9.0%, 2=91.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:13.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:13.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:13.251 issued rwts: total=1226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:13.251 latency : target=0, window=0, percentile=100.00%, depth=3 01:10:13.251 filename0: (groupid=0, jobs=1): err= 0: pid=117277: Mon Jul 22 11:05:19 2024 01:10:13.251 read: IOPS=244, BW=30.5MiB/s (32.0MB/s)(153MiB/5005msec) 01:10:13.251 slat (nsec): min=5811, max=46156, avg=11538.29, stdev=6725.47 01:10:13.251 clat (usec): min=3268, max=51793, avg=12257.50, stdev=12716.31 01:10:13.251 lat (usec): min=3274, max=51828, avg=12269.04, stdev=12716.54 01:10:13.251 clat percentiles (usec): 01:10:13.251 | 1.00th=[ 3359], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5932], 01:10:13.251 | 30.00th=[ 6194], 40.00th=[ 6587], 50.00th=[ 9110], 60.00th=[ 9634], 01:10:13.251 | 70.00th=[10028], 80.00th=[10552], 90.00th=[45876], 95.00th=[49021], 01:10:13.251 | 99.00th=[51119], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 01:10:13.251 | 99.99th=[51643] 01:10:13.251 bw ( KiB/s): min=21504, max=41472, per=27.22%, avg=31232.00, stdev=6148.74, samples=10 01:10:13.251 iops : min= 168, max= 324, avg=244.00, stdev=48.04, samples=10 01:10:13.251 lat (msec) : 4=1.47%, 10=68.93%, 20=19.05%, 50=6.21%, 100=4.33% 01:10:13.251 cpu : usr=95.14%, sys=3.50%, ctx=47, majf=0, minf=0 01:10:13.251 IO depths : 1=5.6%, 2=94.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:13.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:13.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:13.251 issued rwts: total=1223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:13.251 latency : target=0, window=0, percentile=100.00%, depth=3 01:10:13.251 filename0: (groupid=0, jobs=1): err= 0: pid=117278: Mon Jul 22 11:05:19 2024 01:10:13.251 read: IOPS=407, BW=50.9MiB/s (53.4MB/s)(255MiB/5004msec) 01:10:13.251 slat (nsec): min=5861, max=44077, avg=12294.43, stdev=7542.26 01:10:13.251 clat (usec): min=2891, max=48856, avg=7350.17, stdev=3914.53 01:10:13.251 lat (usec): min=2899, max=48866, avg=7362.47, stdev=3916.97 01:10:13.251 clat percentiles (usec): 01:10:13.251 | 1.00th=[ 3097], 5.00th=[ 3294], 10.00th=[ 
3359], 20.00th=[ 3458], 01:10:13.251 | 30.00th=[ 5669], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7439], 01:10:13.251 | 70.00th=[ 8455], 80.00th=[10683], 90.00th=[11338], 95.00th=[11863], 01:10:13.251 | 99.00th=[12649], 99.50th=[13304], 99.90th=[47973], 99.95th=[48497], 01:10:13.251 | 99.99th=[49021] 01:10:13.251 bw ( KiB/s): min=43520, max=65280, per=45.40%, avg=52096.00, stdev=6482.24, samples=10 01:10:13.251 iops : min= 340, max= 510, avg=407.00, stdev=50.64, samples=10 01:10:13.251 lat (msec) : 4=26.15%, 10=49.75%, 20=23.65%, 50=0.44% 01:10:13.251 cpu : usr=94.18%, sys=4.40%, ctx=50, majf=0, minf=0 01:10:13.251 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:13.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:13.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:13.251 issued rwts: total=2038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:13.251 latency : target=0, window=0, percentile=100.00%, depth=3 01:10:13.251 01:10:13.251 Run status group 0 (all jobs): 01:10:13.251 READ: bw=112MiB/s (118MB/s), 30.5MiB/s-50.9MiB/s (32.0MB/s-53.4MB/s), io=561MiB (588MB), run=5004-5005msec 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 bdev_null0 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 [2024-07-22 11:05:20.202138] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 bdev_null1 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 bdev_null2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:10:13.251 { 01:10:13.251 "params": { 01:10:13.251 "name": "Nvme$subsystem", 01:10:13.251 "trtype": "$TEST_TRANSPORT", 01:10:13.251 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:13.251 "adrfam": "ipv4", 
01:10:13.251 "trsvcid": "$NVMF_PORT", 01:10:13.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:13.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:13.251 "hdgst": ${hdgst:-false}, 01:10:13.251 "ddgst": ${ddgst:-false} 01:10:13.251 }, 01:10:13.251 "method": "bdev_nvme_attach_controller" 01:10:13.251 } 01:10:13.251 EOF 01:10:13.251 )") 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 01:10:13.251 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:10:13.252 { 01:10:13.252 "params": { 01:10:13.252 "name": "Nvme$subsystem", 01:10:13.252 "trtype": "$TEST_TRANSPORT", 01:10:13.252 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:13.252 "adrfam": "ipv4", 01:10:13.252 "trsvcid": "$NVMF_PORT", 01:10:13.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:13.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:13.252 "hdgst": ${hdgst:-false}, 01:10:13.252 "ddgst": ${ddgst:-false} 01:10:13.252 }, 01:10:13.252 "method": "bdev_nvme_attach_controller" 01:10:13.252 } 01:10:13.252 EOF 01:10:13.252 )") 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:10:13.252 
11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:10:13.252 { 01:10:13.252 "params": { 01:10:13.252 "name": "Nvme$subsystem", 01:10:13.252 "trtype": "$TEST_TRANSPORT", 01:10:13.252 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:13.252 "adrfam": "ipv4", 01:10:13.252 "trsvcid": "$NVMF_PORT", 01:10:13.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:13.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:13.252 "hdgst": ${hdgst:-false}, 01:10:13.252 "ddgst": ${ddgst:-false} 01:10:13.252 }, 01:10:13.252 "method": "bdev_nvme_attach_controller" 01:10:13.252 } 01:10:13.252 EOF 01:10:13.252 )") 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:10:13.252 "params": { 01:10:13.252 "name": "Nvme0", 01:10:13.252 "trtype": "tcp", 01:10:13.252 "traddr": "10.0.0.2", 01:10:13.252 "adrfam": "ipv4", 01:10:13.252 "trsvcid": "4420", 01:10:13.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:13.252 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:10:13.252 "hdgst": false, 01:10:13.252 "ddgst": false 01:10:13.252 }, 01:10:13.252 "method": "bdev_nvme_attach_controller" 01:10:13.252 },{ 01:10:13.252 "params": { 01:10:13.252 "name": "Nvme1", 01:10:13.252 "trtype": "tcp", 01:10:13.252 "traddr": "10.0.0.2", 01:10:13.252 "adrfam": "ipv4", 01:10:13.252 "trsvcid": "4420", 01:10:13.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:10:13.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:10:13.252 "hdgst": false, 01:10:13.252 "ddgst": false 01:10:13.252 }, 01:10:13.252 "method": "bdev_nvme_attach_controller" 01:10:13.252 },{ 01:10:13.252 "params": { 01:10:13.252 "name": "Nvme2", 01:10:13.252 "trtype": "tcp", 01:10:13.252 "traddr": "10.0.0.2", 01:10:13.252 "adrfam": "ipv4", 01:10:13.252 "trsvcid": "4420", 01:10:13.252 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:10:13.252 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:10:13.252 "hdgst": false, 01:10:13.252 "ddgst": false 01:10:13.252 }, 01:10:13.252 "method": "bdev_nvme_attach_controller" 01:10:13.252 }' 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 
01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:10:13.252 11:05:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:13.252 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:10:13.252 ... 01:10:13.252 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:10:13.252 ... 01:10:13.252 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:10:13.252 ... 01:10:13.252 fio-3.35 01:10:13.252 Starting 24 threads 01:10:25.511 01:10:25.511 filename0: (groupid=0, jobs=1): err= 0: pid=117374: Mon Jul 22 11:05:31 2024 01:10:25.511 read: IOPS=269, BW=1078KiB/s (1103kB/s)(10.5MiB/10023msec) 01:10:25.511 slat (usec): min=2, max=8022, avg=18.94, stdev=267.03 01:10:25.511 clat (msec): min=24, max=123, avg=59.25, stdev=14.70 01:10:25.511 lat (msec): min=24, max=123, avg=59.27, stdev=14.69 01:10:25.511 clat percentiles (msec): 01:10:25.511 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 48], 01:10:25.511 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 61], 01:10:25.511 | 70.00th=[ 67], 80.00th=[ 71], 90.00th=[ 75], 95.00th=[ 85], 01:10:25.511 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 124], 99.95th=[ 124], 01:10:25.511 | 99.99th=[ 124] 01:10:25.511 bw ( KiB/s): min= 848, max= 1200, per=3.74%, avg=1069.47, stdev=99.67, samples=19 01:10:25.511 iops : min= 212, max= 300, avg=267.37, stdev=24.92, samples=19 01:10:25.511 lat (msec) : 50=30.22%, 100=67.70%, 250=2.07% 01:10:25.511 cpu : usr=41.03%, sys=1.02%, ctx=1188, majf=0, minf=9 01:10:25.511 IO depths : 1=2.6%, 2=6.0%, 4=16.0%, 8=64.7%, 16=10.8%, 32=0.0%, >=64=0.0% 01:10:25.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 complete : 0=0.0%, 4=91.7%, 8=3.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 issued rwts: total=2700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.511 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.511 filename0: (groupid=0, jobs=1): err= 0: pid=117375: Mon Jul 22 11:05:31 2024 01:10:25.511 read: IOPS=343, BW=1374KiB/s (1407kB/s)(13.5MiB/10038msec) 01:10:25.511 slat (usec): min=5, max=8026, avg=16.15, stdev=180.92 01:10:25.511 clat (usec): min=1592, max=107836, avg=46405.20, stdev=14667.58 01:10:25.511 lat (usec): min=1604, max=107842, avg=46421.35, stdev=14670.29 01:10:25.511 clat percentiles (msec): 01:10:25.511 | 1.00th=[ 3], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 35], 01:10:25.511 | 30.00th=[ 39], 40.00th=[ 41], 50.00th=[ 47], 60.00th=[ 50], 01:10:25.511 | 70.00th=[ 55], 80.00th=[ 59], 90.00th=[ 65], 95.00th=[ 72], 01:10:25.511 | 99.00th=[ 85], 99.50th=[ 87], 99.90th=[ 88], 99.95th=[ 108], 01:10:25.511 | 99.99th=[ 108] 01:10:25.511 bw ( KiB/s): min= 1104, max= 1840, per=4.81%, avg=1376.30, stdev=179.93, samples=20 01:10:25.511 iops : min= 276, max= 460, avg=344.05, stdev=44.96, samples=20 01:10:25.511 lat (msec) : 2=0.06%, 4=1.80%, 10=0.93%, 20=0.12%, 50=60.32% 01:10:25.511 lat (msec) : 100=36.72%, 250=0.06% 01:10:25.511 cpu : usr=42.84%, sys=1.02%, ctx=1243, majf=0, minf=9 01:10:25.511 IO depths : 1=0.7%, 2=1.8%, 4=9.3%, 8=75.3%, 16=12.9%, 32=0.0%, >=64=0.0% 01:10:25.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 
complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 issued rwts: total=3448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.511 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.511 filename0: (groupid=0, jobs=1): err= 0: pid=117376: Mon Jul 22 11:05:31 2024 01:10:25.511 read: IOPS=321, BW=1285KiB/s (1316kB/s)(12.6MiB/10044msec) 01:10:25.511 slat (usec): min=3, max=4034, avg=16.40, stdev=150.74 01:10:25.511 clat (msec): min=18, max=115, avg=49.63, stdev=15.02 01:10:25.511 lat (msec): min=18, max=115, avg=49.65, stdev=15.02 01:10:25.511 clat percentiles (msec): 01:10:25.511 | 1.00th=[ 26], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 36], 01:10:25.511 | 30.00th=[ 39], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 54], 01:10:25.511 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 73], 01:10:25.511 | 99.00th=[ 92], 99.50th=[ 96], 99.90th=[ 116], 99.95th=[ 116], 01:10:25.511 | 99.99th=[ 116] 01:10:25.511 bw ( KiB/s): min= 896, max= 1632, per=4.48%, avg=1283.95, stdev=199.48, samples=20 01:10:25.511 iops : min= 224, max= 408, avg=320.95, stdev=49.91, samples=20 01:10:25.511 lat (msec) : 20=0.19%, 50=56.03%, 100=43.29%, 250=0.50% 01:10:25.511 cpu : usr=39.17%, sys=0.99%, ctx=1315, majf=0, minf=9 01:10:25.511 IO depths : 1=0.5%, 2=1.3%, 4=8.1%, 8=77.2%, 16=12.9%, 32=0.0%, >=64=0.0% 01:10:25.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 complete : 0=0.0%, 4=89.5%, 8=5.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 issued rwts: total=3227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.511 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.511 filename0: (groupid=0, jobs=1): err= 0: pid=117377: Mon Jul 22 11:05:31 2024 01:10:25.511 read: IOPS=275, BW=1100KiB/s (1126kB/s)(10.8MiB/10018msec) 01:10:25.511 slat (usec): min=2, max=8037, avg=16.07, stdev=172.53 01:10:25.511 clat (msec): min=24, max=106, avg=58.04, stdev=13.70 01:10:25.511 lat (msec): min=24, max=106, avg=58.05, stdev=13.70 01:10:25.511 clat percentiles (msec): 01:10:25.511 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 48], 01:10:25.511 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 59], 01:10:25.511 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 78], 95.00th=[ 83], 01:10:25.511 | 99.00th=[ 96], 99.50th=[ 105], 99.90th=[ 107], 99.95th=[ 107], 01:10:25.511 | 99.99th=[ 107] 01:10:25.511 bw ( KiB/s): min= 896, max= 1328, per=3.87%, avg=1106.11, stdev=96.55, samples=19 01:10:25.511 iops : min= 224, max= 332, avg=276.53, stdev=24.14, samples=19 01:10:25.511 lat (msec) : 50=28.17%, 100=71.07%, 250=0.76% 01:10:25.511 cpu : usr=42.60%, sys=1.08%, ctx=1345, majf=0, minf=9 01:10:25.511 IO depths : 1=2.5%, 2=5.7%, 4=15.5%, 8=65.5%, 16=10.8%, 32=0.0%, >=64=0.0% 01:10:25.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 complete : 0=0.0%, 4=91.7%, 8=3.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 issued rwts: total=2755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.511 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.511 filename0: (groupid=0, jobs=1): err= 0: pid=117378: Mon Jul 22 11:05:31 2024 01:10:25.511 read: IOPS=273, BW=1095KiB/s (1122kB/s)(10.7MiB/10018msec) 01:10:25.511 slat (usec): min=3, max=8037, avg=19.50, stdev=257.71 01:10:25.511 clat (msec): min=23, max=107, avg=58.31, stdev=15.35 01:10:25.511 lat (msec): min=23, max=107, avg=58.33, stdev=15.36 01:10:25.511 clat percentiles (msec): 01:10:25.511 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 
48], 01:10:25.511 | 30.00th=[ 49], 40.00th=[ 53], 50.00th=[ 59], 60.00th=[ 61], 01:10:25.511 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 85], 01:10:25.511 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 108], 99.95th=[ 108], 01:10:25.511 | 99.99th=[ 108] 01:10:25.511 bw ( KiB/s): min= 944, max= 1384, per=3.81%, avg=1091.84, stdev=112.89, samples=19 01:10:25.511 iops : min= 236, max= 346, avg=272.95, stdev=28.24, samples=19 01:10:25.511 lat (msec) : 50=35.29%, 100=64.38%, 250=0.33% 01:10:25.511 cpu : usr=32.44%, sys=0.71%, ctx=960, majf=0, minf=9 01:10:25.511 IO depths : 1=1.5%, 2=3.3%, 4=10.8%, 8=72.2%, 16=12.2%, 32=0.0%, >=64=0.0% 01:10:25.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 issued rwts: total=2743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.511 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.511 filename0: (groupid=0, jobs=1): err= 0: pid=117379: Mon Jul 22 11:05:31 2024 01:10:25.511 read: IOPS=314, BW=1257KiB/s (1287kB/s)(12.3MiB/10051msec) 01:10:25.511 slat (usec): min=4, max=11015, avg=21.15, stdev=323.45 01:10:25.511 clat (msec): min=22, max=108, avg=50.80, stdev=13.94 01:10:25.511 lat (msec): min=22, max=108, avg=50.82, stdev=13.95 01:10:25.511 clat percentiles (msec): 01:10:25.511 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 36], 01:10:25.511 | 30.00th=[ 40], 40.00th=[ 48], 50.00th=[ 48], 60.00th=[ 56], 01:10:25.511 | 70.00th=[ 61], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 73], 01:10:25.511 | 99.00th=[ 85], 99.50th=[ 85], 99.90th=[ 109], 99.95th=[ 109], 01:10:25.511 | 99.99th=[ 109] 01:10:25.511 bw ( KiB/s): min= 1072, max= 1504, per=4.39%, avg=1255.65, stdev=110.88, samples=20 01:10:25.511 iops : min= 268, max= 376, avg=313.90, stdev=27.73, samples=20 01:10:25.511 lat (msec) : 50=55.79%, 100=43.95%, 250=0.25% 01:10:25.511 cpu : usr=32.46%, sys=0.75%, ctx=946, majf=0, minf=9 01:10:25.511 IO depths : 1=0.5%, 2=1.6%, 4=8.7%, 8=75.9%, 16=13.2%, 32=0.0%, >=64=0.0% 01:10:25.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.511 issued rwts: total=3158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.511 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.511 filename0: (groupid=0, jobs=1): err= 0: pid=117380: Mon Jul 22 11:05:31 2024 01:10:25.511 read: IOPS=262, BW=1048KiB/s (1073kB/s)(10.3MiB/10022msec) 01:10:25.511 slat (usec): min=3, max=8034, avg=15.55, stdev=175.13 01:10:25.512 clat (msec): min=21, max=117, avg=60.96, stdev=16.17 01:10:25.512 lat (msec): min=21, max=117, avg=60.98, stdev=16.16 01:10:25.512 clat percentiles (msec): 01:10:25.512 | 1.00th=[ 29], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 48], 01:10:25.512 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 65], 01:10:25.512 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 90], 01:10:25.512 | 99.00th=[ 110], 99.50th=[ 115], 99.90th=[ 118], 99.95th=[ 118], 01:10:25.512 | 99.99th=[ 118] 01:10:25.512 bw ( KiB/s): min= 896, max= 1248, per=3.63%, avg=1038.37, stdev=102.29, samples=19 01:10:25.512 iops : min= 224, max= 312, avg=259.58, stdev=25.59, samples=19 01:10:25.512 lat (msec) : 50=28.10%, 100=69.00%, 250=2.89% 01:10:25.512 cpu : usr=38.85%, sys=1.24%, ctx=1087, majf=0, minf=9 01:10:25.512 IO depths : 1=2.8%, 2=6.0%, 4=16.3%, 8=64.5%, 16=10.5%, 32=0.0%, >=64=0.0% 01:10:25.512 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 complete : 0=0.0%, 4=91.9%, 8=3.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 issued rwts: total=2626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.512 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.512 filename0: (groupid=0, jobs=1): err= 0: pid=117381: Mon Jul 22 11:05:31 2024 01:10:25.512 read: IOPS=279, BW=1118KiB/s (1145kB/s)(10.9MiB/10016msec) 01:10:25.512 slat (usec): min=5, max=8020, avg=13.32, stdev=151.55 01:10:25.512 clat (msec): min=21, max=107, avg=57.13, stdev=15.76 01:10:25.512 lat (msec): min=21, max=107, avg=57.14, stdev=15.76 01:10:25.512 clat percentiles (msec): 01:10:25.512 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 46], 01:10:25.512 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 61], 01:10:25.512 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 84], 01:10:25.512 | 99.00th=[ 105], 99.50th=[ 107], 99.90th=[ 108], 99.95th=[ 108], 01:10:25.512 | 99.99th=[ 108] 01:10:25.512 bw ( KiB/s): min= 896, max= 1328, per=3.89%, avg=1113.15, stdev=111.40, samples=20 01:10:25.512 iops : min= 224, max= 332, avg=278.25, stdev=27.85, samples=20 01:10:25.512 lat (msec) : 50=38.96%, 100=59.39%, 250=1.64% 01:10:25.512 cpu : usr=36.50%, sys=0.87%, ctx=1141, majf=0, minf=9 01:10:25.512 IO depths : 1=2.0%, 2=4.5%, 4=13.6%, 8=68.6%, 16=11.2%, 32=0.0%, >=64=0.0% 01:10:25.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 complete : 0=0.0%, 4=90.9%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.512 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.512 filename1: (groupid=0, jobs=1): err= 0: pid=117382: Mon Jul 22 11:05:31 2024 01:10:25.512 read: IOPS=270, BW=1084KiB/s (1110kB/s)(10.6MiB/10019msec) 01:10:25.512 slat (usec): min=2, max=4029, avg=14.23, stdev=109.37 01:10:25.512 clat (msec): min=25, max=120, avg=58.96, stdev=14.70 01:10:25.512 lat (msec): min=25, max=120, avg=58.97, stdev=14.70 01:10:25.512 clat percentiles (msec): 01:10:25.512 | 1.00th=[ 30], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 48], 01:10:25.512 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 61], 01:10:25.512 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 78], 95.00th=[ 90], 01:10:25.512 | 99.00th=[ 100], 99.50th=[ 100], 99.90th=[ 122], 99.95th=[ 122], 01:10:25.512 | 99.99th=[ 122] 01:10:25.512 bw ( KiB/s): min= 894, max= 1328, per=3.80%, avg=1088.74, stdev=110.01, samples=19 01:10:25.512 iops : min= 223, max= 332, avg=272.16, stdev=27.55, samples=19 01:10:25.512 lat (msec) : 50=27.12%, 100=72.44%, 250=0.44% 01:10:25.512 cpu : usr=44.19%, sys=1.18%, ctx=1302, majf=0, minf=9 01:10:25.512 IO depths : 1=2.2%, 2=5.4%, 4=15.3%, 8=65.8%, 16=11.2%, 32=0.0%, >=64=0.0% 01:10:25.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 complete : 0=0.0%, 4=91.6%, 8=3.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 issued rwts: total=2714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.512 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.512 filename1: (groupid=0, jobs=1): err= 0: pid=117383: Mon Jul 22 11:05:31 2024 01:10:25.512 read: IOPS=301, BW=1205KiB/s (1234kB/s)(11.8MiB/10029msec) 01:10:25.512 slat (usec): min=3, max=8020, avg=19.29, stdev=214.62 01:10:25.512 clat (msec): min=23, max=111, avg=52.91, stdev=15.28 01:10:25.512 lat (msec): min=23, max=111, avg=52.93, stdev=15.28 01:10:25.512 clat percentiles (msec): 01:10:25.512 | 
1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 39], 01:10:25.512 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 56], 01:10:25.512 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 73], 95.00th=[ 82], 01:10:25.512 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 105], 99.95th=[ 112], 01:10:25.512 | 99.99th=[ 112] 01:10:25.512 bw ( KiB/s): min= 1008, max= 1632, per=4.20%, avg=1202.50, stdev=134.36, samples=20 01:10:25.512 iops : min= 252, max= 408, avg=300.60, stdev=33.58, samples=20 01:10:25.512 lat (msec) : 50=49.01%, 100=50.76%, 250=0.23% 01:10:25.512 cpu : usr=39.62%, sys=0.90%, ctx=1168, majf=0, minf=9 01:10:25.512 IO depths : 1=1.7%, 2=4.0%, 4=12.3%, 8=70.5%, 16=11.4%, 32=0.0%, >=64=0.0% 01:10:25.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 issued rwts: total=3022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.512 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.512 filename1: (groupid=0, jobs=1): err= 0: pid=117384: Mon Jul 22 11:05:31 2024 01:10:25.512 read: IOPS=276, BW=1105KiB/s (1132kB/s)(10.8MiB/10025msec) 01:10:25.512 slat (usec): min=4, max=8020, avg=15.22, stdev=169.17 01:10:25.512 clat (msec): min=25, max=110, avg=57.71, stdev=14.75 01:10:25.512 lat (msec): min=25, max=110, avg=57.73, stdev=14.76 01:10:25.512 clat percentiles (msec): 01:10:25.512 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 48], 01:10:25.512 | 30.00th=[ 50], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 59], 01:10:25.512 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 78], 95.00th=[ 85], 01:10:25.512 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 106], 99.95th=[ 110], 01:10:25.512 | 99.99th=[ 110] 01:10:25.512 bw ( KiB/s): min= 952, max= 1384, per=3.84%, avg=1098.58, stdev=118.20, samples=19 01:10:25.512 iops : min= 238, max= 346, avg=274.63, stdev=29.55, samples=19 01:10:25.512 lat (msec) : 50=34.66%, 100=65.13%, 250=0.22% 01:10:25.512 cpu : usr=39.71%, sys=1.03%, ctx=1182, majf=0, minf=9 01:10:25.512 IO depths : 1=2.5%, 2=5.6%, 4=16.3%, 8=65.2%, 16=10.4%, 32=0.0%, >=64=0.0% 01:10:25.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 issued rwts: total=2770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.512 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.512 filename1: (groupid=0, jobs=1): err= 0: pid=117385: Mon Jul 22 11:05:31 2024 01:10:25.512 read: IOPS=319, BW=1280KiB/s (1311kB/s)(12.6MiB/10057msec) 01:10:25.512 slat (usec): min=5, max=8021, avg=12.17, stdev=141.37 01:10:25.512 clat (msec): min=2, max=116, avg=49.90, stdev=16.61 01:10:25.512 lat (msec): min=2, max=116, avg=49.91, stdev=16.61 01:10:25.512 clat percentiles (msec): 01:10:25.512 | 1.00th=[ 3], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 36], 01:10:25.512 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 56], 01:10:25.512 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 71], 95.00th=[ 74], 01:10:25.512 | 99.00th=[ 90], 99.50th=[ 96], 99.90th=[ 117], 99.95th=[ 117], 01:10:25.512 | 99.99th=[ 117] 01:10:25.512 bw ( KiB/s): min= 912, max= 2138, per=4.47%, avg=1280.15, stdev=269.99, samples=20 01:10:25.512 iops : min= 228, max= 534, avg=320.00, stdev=67.43, samples=20 01:10:25.512 lat (msec) : 4=2.30%, 10=1.18%, 20=0.50%, 50=49.35%, 100=46.52% 01:10:25.512 lat (msec) : 250=0.16% 01:10:25.512 cpu : usr=33.40%, sys=0.85%, ctx=974, majf=0, minf=0 01:10:25.512 IO depths : 
1=0.7%, 2=1.8%, 4=8.5%, 8=75.9%, 16=13.0%, 32=0.0%, >=64=0.0% 01:10:25.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 issued rwts: total=3218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.512 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.512 filename1: (groupid=0, jobs=1): err= 0: pid=117386: Mon Jul 22 11:05:31 2024 01:10:25.512 read: IOPS=281, BW=1125KiB/s (1152kB/s)(11.0MiB/10051msec) 01:10:25.512 slat (usec): min=3, max=8028, avg=24.07, stdev=336.78 01:10:25.512 clat (msec): min=26, max=119, avg=56.63, stdev=15.09 01:10:25.512 lat (msec): min=26, max=119, avg=56.65, stdev=15.10 01:10:25.512 clat percentiles (msec): 01:10:25.512 | 1.00th=[ 30], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 45], 01:10:25.512 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 61], 01:10:25.512 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 77], 95.00th=[ 84], 01:10:25.512 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 121], 99.95th=[ 121], 01:10:25.512 | 99.99th=[ 121] 01:10:25.512 bw ( KiB/s): min= 936, max= 1304, per=3.93%, avg=1124.80, stdev=112.01, samples=20 01:10:25.512 iops : min= 234, max= 326, avg=281.20, stdev=28.00, samples=20 01:10:25.512 lat (msec) : 50=37.48%, 100=61.81%, 250=0.71% 01:10:25.512 cpu : usr=35.75%, sys=0.90%, ctx=984, majf=0, minf=9 01:10:25.512 IO depths : 1=1.1%, 2=2.8%, 4=11.0%, 8=72.7%, 16=12.4%, 32=0.0%, >=64=0.0% 01:10:25.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 issued rwts: total=2828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.512 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.512 filename1: (groupid=0, jobs=1): err= 0: pid=117387: Mon Jul 22 11:05:31 2024 01:10:25.512 read: IOPS=312, BW=1251KiB/s (1281kB/s)(12.3MiB/10040msec) 01:10:25.512 slat (usec): min=2, max=4031, avg=15.18, stdev=124.37 01:10:25.512 clat (usec): min=24358, max=99867, avg=50998.95, stdev=13416.03 01:10:25.512 lat (usec): min=24365, max=99873, avg=51014.14, stdev=13419.37 01:10:25.512 clat percentiles (msec): 01:10:25.512 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 01:10:25.512 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 53], 01:10:25.512 | 70.00th=[ 57], 80.00th=[ 62], 90.00th=[ 70], 95.00th=[ 78], 01:10:25.512 | 99.00th=[ 90], 99.50th=[ 91], 99.90th=[ 94], 99.95th=[ 101], 01:10:25.512 | 99.99th=[ 101] 01:10:25.512 bw ( KiB/s): min= 1008, max= 1536, per=4.37%, avg=1250.10, stdev=144.23, samples=20 01:10:25.512 iops : min= 252, max= 384, avg=312.50, stdev=36.05, samples=20 01:10:25.512 lat (msec) : 50=50.21%, 100=49.79% 01:10:25.512 cpu : usr=44.23%, sys=1.13%, ctx=1418, majf=0, minf=9 01:10:25.512 IO depths : 1=1.5%, 2=3.7%, 4=11.9%, 8=71.0%, 16=11.9%, 32=0.0%, >=64=0.0% 01:10:25.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.512 issued rwts: total=3141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.512 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.512 filename1: (groupid=0, jobs=1): err= 0: pid=117388: Mon Jul 22 11:05:31 2024 01:10:25.512 read: IOPS=310, BW=1244KiB/s (1273kB/s)(12.2MiB/10023msec) 01:10:25.512 slat (usec): min=5, max=8020, avg=18.68, stdev=258.75 01:10:25.512 clat (msec): min=23, max=120, avg=51.36, stdev=15.28 
01:10:25.512 lat (msec): min=23, max=120, avg=51.38, stdev=15.28 01:10:25.512 clat percentiles (msec): 01:10:25.513 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 37], 01:10:25.513 | 30.00th=[ 42], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 52], 01:10:25.513 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 72], 95.00th=[ 81], 01:10:25.513 | 99.00th=[ 96], 99.50th=[ 105], 99.90th=[ 122], 99.95th=[ 122], 01:10:25.513 | 99.99th=[ 122] 01:10:25.513 bw ( KiB/s): min= 976, max= 1552, per=4.33%, avg=1240.00, stdev=167.69, samples=20 01:10:25.513 iops : min= 244, max= 388, avg=310.00, stdev=41.92, samples=20 01:10:25.513 lat (msec) : 50=58.54%, 100=40.95%, 250=0.51% 01:10:25.513 cpu : usr=35.92%, sys=0.98%, ctx=995, majf=0, minf=9 01:10:25.513 IO depths : 1=1.0%, 2=2.3%, 4=9.2%, 8=74.8%, 16=12.7%, 32=0.0%, >=64=0.0% 01:10:25.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 issued rwts: total=3116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.513 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.513 filename1: (groupid=0, jobs=1): err= 0: pid=117389: Mon Jul 22 11:05:31 2024 01:10:25.513 read: IOPS=301, BW=1208KiB/s (1237kB/s)(11.8MiB/10044msec) 01:10:25.513 slat (usec): min=5, max=8019, avg=15.94, stdev=218.20 01:10:25.513 clat (msec): min=16, max=107, avg=52.76, stdev=15.61 01:10:25.513 lat (msec): min=16, max=107, avg=52.78, stdev=15.61 01:10:25.513 clat percentiles (msec): 01:10:25.513 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 37], 01:10:25.513 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 58], 01:10:25.513 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 72], 95.00th=[ 83], 01:10:25.513 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 01:10:25.513 | 99.99th=[ 108] 01:10:25.513 bw ( KiB/s): min= 1015, max= 1632, per=4.23%, avg=1210.75, stdev=148.39, samples=20 01:10:25.513 iops : min= 253, max= 408, avg=302.65, stdev=37.15, samples=20 01:10:25.513 lat (msec) : 20=0.53%, 50=50.08%, 100=49.26%, 250=0.13% 01:10:25.513 cpu : usr=35.34%, sys=0.86%, ctx=993, majf=0, minf=9 01:10:25.513 IO depths : 1=0.6%, 2=1.9%, 4=9.4%, 8=75.1%, 16=12.9%, 32=0.0%, >=64=0.0% 01:10:25.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 issued rwts: total=3033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.513 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.513 filename2: (groupid=0, jobs=1): err= 0: pid=117390: Mon Jul 22 11:05:31 2024 01:10:25.513 read: IOPS=360, BW=1441KiB/s (1476kB/s)(14.1MiB/10001msec) 01:10:25.513 slat (usec): min=4, max=4022, avg=13.49, stdev=130.97 01:10:25.513 clat (msec): min=3, max=108, avg=44.32, stdev=13.61 01:10:25.513 lat (msec): min=3, max=108, avg=44.34, stdev=13.61 01:10:25.513 clat percentiles (msec): 01:10:25.513 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 34], 01:10:25.513 | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 43], 60.00th=[ 47], 01:10:25.513 | 70.00th=[ 51], 80.00th=[ 55], 90.00th=[ 63], 95.00th=[ 71], 01:10:25.513 | 99.00th=[ 85], 99.50th=[ 96], 99.90th=[ 101], 99.95th=[ 109], 01:10:25.513 | 99.99th=[ 109] 01:10:25.513 bw ( KiB/s): min= 1152, max= 1680, per=5.09%, avg=1455.95, stdev=154.75, samples=19 01:10:25.513 iops : min= 288, max= 420, avg=363.95, stdev=38.66, samples=19 01:10:25.513 lat (msec) : 4=0.44%, 10=0.89%, 50=68.72%, 100=29.75%, 250=0.19% 
01:10:25.513 cpu : usr=43.98%, sys=1.16%, ctx=1288, majf=0, minf=9 01:10:25.513 IO depths : 1=0.4%, 2=0.9%, 4=6.9%, 8=78.5%, 16=13.3%, 32=0.0%, >=64=0.0% 01:10:25.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 issued rwts: total=3603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.513 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.513 filename2: (groupid=0, jobs=1): err= 0: pid=117391: Mon Jul 22 11:05:31 2024 01:10:25.513 read: IOPS=303, BW=1216KiB/s (1245kB/s)(11.9MiB/10032msec) 01:10:25.513 slat (nsec): min=4268, max=73245, avg=9244.50, stdev=5475.37 01:10:25.513 clat (msec): min=23, max=109, avg=52.52, stdev=15.25 01:10:25.513 lat (msec): min=23, max=109, avg=52.52, stdev=15.25 01:10:25.513 clat percentiles (msec): 01:10:25.513 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 36], 01:10:25.513 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 58], 01:10:25.513 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 81], 01:10:25.513 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 110], 99.95th=[ 110], 01:10:25.513 | 99.99th=[ 110] 01:10:25.513 bw ( KiB/s): min= 984, max= 1472, per=4.24%, avg=1213.30, stdev=130.05, samples=20 01:10:25.513 iops : min= 246, max= 368, avg=303.30, stdev=32.55, samples=20 01:10:25.513 lat (msec) : 50=51.66%, 100=48.15%, 250=0.20% 01:10:25.513 cpu : usr=32.27%, sys=0.87%, ctx=927, majf=0, minf=9 01:10:25.513 IO depths : 1=0.6%, 2=1.1%, 4=7.2%, 8=77.8%, 16=13.3%, 32=0.0%, >=64=0.0% 01:10:25.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 complete : 0=0.0%, 4=89.4%, 8=6.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 issued rwts: total=3049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.513 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.513 filename2: (groupid=0, jobs=1): err= 0: pid=117392: Mon Jul 22 11:05:31 2024 01:10:25.513 read: IOPS=279, BW=1117KiB/s (1144kB/s)(10.9MiB/10030msec) 01:10:25.513 slat (usec): min=2, max=4016, avg=12.78, stdev=76.07 01:10:25.513 clat (msec): min=25, max=127, avg=57.14, stdev=15.31 01:10:25.513 lat (msec): min=25, max=127, avg=57.15, stdev=15.32 01:10:25.513 clat percentiles (msec): 01:10:25.513 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 46], 01:10:25.513 | 30.00th=[ 49], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 60], 01:10:25.513 | 70.00th=[ 64], 80.00th=[ 68], 90.00th=[ 75], 95.00th=[ 84], 01:10:25.513 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 128], 99.95th=[ 128], 01:10:25.513 | 99.99th=[ 128] 01:10:25.513 bw ( KiB/s): min= 896, max= 1408, per=3.90%, avg=1116.90, stdev=120.88, samples=20 01:10:25.513 iops : min= 224, max= 352, avg=279.20, stdev=30.20, samples=20 01:10:25.513 lat (msec) : 50=34.17%, 100=63.91%, 250=1.93% 01:10:25.513 cpu : usr=42.52%, sys=1.13%, ctx=1418, majf=0, minf=9 01:10:25.513 IO depths : 1=1.3%, 2=3.2%, 4=10.2%, 8=71.9%, 16=13.5%, 32=0.0%, >=64=0.0% 01:10:25.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 complete : 0=0.0%, 4=90.7%, 8=5.7%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 issued rwts: total=2801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.513 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.513 filename2: (groupid=0, jobs=1): err= 0: pid=117393: Mon Jul 22 11:05:31 2024 01:10:25.513 read: IOPS=297, BW=1191KiB/s (1220kB/s)(11.7MiB/10018msec) 01:10:25.513 slat (usec): min=3, max=4054, avg=13.78, 
stdev=127.71 01:10:25.513 clat (msec): min=22, max=107, avg=53.64, stdev=15.61 01:10:25.513 lat (msec): min=22, max=107, avg=53.65, stdev=15.61 01:10:25.513 clat percentiles (msec): 01:10:25.513 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 39], 01:10:25.513 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 58], 01:10:25.513 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 84], 01:10:25.513 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 108], 99.95th=[ 108], 01:10:25.513 | 99.99th=[ 108] 01:10:25.513 bw ( KiB/s): min= 896, max= 1384, per=4.15%, avg=1187.40, stdev=133.35, samples=20 01:10:25.513 iops : min= 224, max= 346, avg=296.80, stdev=33.34, samples=20 01:10:25.513 lat (msec) : 50=51.74%, 100=47.99%, 250=0.27% 01:10:25.513 cpu : usr=35.39%, sys=1.00%, ctx=1021, majf=0, minf=9 01:10:25.513 IO depths : 1=0.7%, 2=1.9%, 4=8.9%, 8=75.6%, 16=12.9%, 32=0.0%, >=64=0.0% 01:10:25.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 issued rwts: total=2984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.513 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.513 filename2: (groupid=0, jobs=1): err= 0: pid=117394: Mon Jul 22 11:05:31 2024 01:10:25.513 read: IOPS=340, BW=1363KiB/s (1395kB/s)(13.4MiB/10050msec) 01:10:25.513 slat (usec): min=3, max=8027, avg=12.87, stdev=149.15 01:10:25.513 clat (msec): min=17, max=117, avg=46.89, stdev=14.42 01:10:25.513 lat (msec): min=17, max=117, avg=46.91, stdev=14.42 01:10:25.513 clat percentiles (msec): 01:10:25.513 | 1.00th=[ 26], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 35], 01:10:25.513 | 30.00th=[ 37], 40.00th=[ 40], 50.00th=[ 46], 60.00th=[ 50], 01:10:25.513 | 70.00th=[ 53], 80.00th=[ 59], 90.00th=[ 66], 95.00th=[ 74], 01:10:25.513 | 99.00th=[ 88], 99.50th=[ 106], 99.90th=[ 111], 99.95th=[ 118], 01:10:25.513 | 99.99th=[ 118] 01:10:25.513 bw ( KiB/s): min= 1112, max= 1624, per=4.76%, avg=1363.30, stdev=136.67, samples=20 01:10:25.513 iops : min= 278, max= 406, avg=340.80, stdev=34.17, samples=20 01:10:25.513 lat (msec) : 20=0.18%, 50=63.46%, 100=35.66%, 250=0.70% 01:10:25.513 cpu : usr=41.10%, sys=1.03%, ctx=1312, majf=0, minf=9 01:10:25.513 IO depths : 1=0.6%, 2=1.4%, 4=8.6%, 8=76.5%, 16=12.9%, 32=0.0%, >=64=0.0% 01:10:25.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 issued rwts: total=3424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.513 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.513 filename2: (groupid=0, jobs=1): err= 0: pid=117395: Mon Jul 22 11:05:31 2024 01:10:25.513 read: IOPS=328, BW=1316KiB/s (1347kB/s)(12.9MiB/10024msec) 01:10:25.513 slat (usec): min=5, max=8022, avg=16.81, stdev=220.57 01:10:25.513 clat (msec): min=19, max=120, avg=48.50, stdev=14.65 01:10:25.513 lat (msec): min=19, max=120, avg=48.52, stdev=14.66 01:10:25.513 clat percentiles (msec): 01:10:25.513 | 1.00th=[ 24], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 36], 01:10:25.513 | 30.00th=[ 38], 40.00th=[ 43], 50.00th=[ 48], 60.00th=[ 51], 01:10:25.513 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 73], 01:10:25.513 | 99.00th=[ 91], 99.50th=[ 96], 99.90th=[ 121], 99.95th=[ 121], 01:10:25.513 | 99.99th=[ 121] 01:10:25.513 bw ( KiB/s): min= 1000, max= 1600, per=4.59%, avg=1314.20, stdev=157.15, samples=20 01:10:25.513 iops : min= 250, max= 400, avg=328.55, stdev=39.29, samples=20 
01:10:25.513 lat (msec) : 20=0.12%, 50=60.60%, 100=39.01%, 250=0.27% 01:10:25.513 cpu : usr=36.26%, sys=0.87%, ctx=1134, majf=0, minf=9 01:10:25.513 IO depths : 1=0.1%, 2=0.2%, 4=4.8%, 8=80.8%, 16=14.1%, 32=0.0%, >=64=0.0% 01:10:25.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 complete : 0=0.0%, 4=88.9%, 8=7.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.513 issued rwts: total=3297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.513 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.513 filename2: (groupid=0, jobs=1): err= 0: pid=117396: Mon Jul 22 11:05:31 2024 01:10:25.513 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10029msec) 01:10:25.513 slat (usec): min=2, max=7954, avg=13.07, stdev=152.75 01:10:25.513 clat (msec): min=24, max=143, avg=59.12, stdev=16.44 01:10:25.513 lat (msec): min=24, max=143, avg=59.13, stdev=16.44 01:10:25.513 clat percentiles (msec): 01:10:25.513 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 47], 01:10:25.513 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 01:10:25.514 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 87], 01:10:25.514 | 99.00th=[ 108], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 144], 01:10:25.514 | 99.99th=[ 144] 01:10:25.514 bw ( KiB/s): min= 888, max= 1304, per=3.76%, avg=1077.20, stdev=121.72, samples=20 01:10:25.514 iops : min= 222, max= 326, avg=269.30, stdev=30.43, samples=20 01:10:25.514 lat (msec) : 50=35.24%, 100=62.25%, 250=2.51% 01:10:25.514 cpu : usr=32.87%, sys=0.78%, ctx=989, majf=0, minf=9 01:10:25.514 IO depths : 1=2.0%, 2=4.3%, 4=12.7%, 8=69.7%, 16=11.3%, 32=0.0%, >=64=0.0% 01:10:25.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.514 complete : 0=0.0%, 4=90.9%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.514 issued rwts: total=2710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.514 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.514 filename2: (groupid=0, jobs=1): err= 0: pid=117397: Mon Jul 22 11:05:31 2024 01:10:25.514 read: IOPS=276, BW=1105KiB/s (1131kB/s)(10.8MiB/10030msec) 01:10:25.514 slat (usec): min=2, max=8020, avg=21.01, stdev=270.48 01:10:25.514 clat (msec): min=22, max=117, avg=57.72, stdev=14.51 01:10:25.514 lat (msec): min=22, max=117, avg=57.74, stdev=14.52 01:10:25.514 clat percentiles (msec): 01:10:25.514 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 48], 01:10:25.514 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 61], 01:10:25.514 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 72], 95.00th=[ 85], 01:10:25.514 | 99.00th=[ 95], 99.50th=[ 108], 99.90th=[ 118], 99.95th=[ 118], 01:10:25.514 | 99.99th=[ 118] 01:10:25.514 bw ( KiB/s): min= 896, max= 1344, per=3.85%, avg=1101.70, stdev=104.57, samples=20 01:10:25.514 iops : min= 224, max= 336, avg=275.40, stdev=26.15, samples=20 01:10:25.514 lat (msec) : 50=36.50%, 100=62.78%, 250=0.72% 01:10:25.514 cpu : usr=32.38%, sys=0.80%, ctx=957, majf=0, minf=9 01:10:25.514 IO depths : 1=1.6%, 2=3.5%, 4=11.8%, 8=71.4%, 16=11.7%, 32=0.0%, >=64=0.0% 01:10:25.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.514 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:25.514 issued rwts: total=2770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:25.514 latency : target=0, window=0, percentile=100.00%, depth=16 01:10:25.514 01:10:25.514 Run status group 0 (all jobs): 01:10:25.514 READ: bw=27.9MiB/s (29.3MB/s), 1048KiB/s-1441KiB/s (1073kB/s-1476kB/s), io=281MiB 
(295MB), run=10001-10057msec 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 bdev_null0 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 [2024-07-22 11:05:31.666185] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 bdev_null1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:10:25.514 { 01:10:25.514 "params": { 01:10:25.514 "name": "Nvme$subsystem", 01:10:25.514 "trtype": "$TEST_TRANSPORT", 01:10:25.514 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:25.514 "adrfam": "ipv4", 01:10:25.514 "trsvcid": "$NVMF_PORT", 01:10:25.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:25.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:25.514 "hdgst": ${hdgst:-false}, 01:10:25.514 "ddgst": ${ddgst:-false} 01:10:25.514 }, 01:10:25.514 "method": "bdev_nvme_attach_controller" 01:10:25.514 } 01:10:25.514 EOF 01:10:25.514 )") 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:10:25.514 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:10:25.515 { 01:10:25.515 "params": { 01:10:25.515 "name": "Nvme$subsystem", 01:10:25.515 "trtype": "$TEST_TRANSPORT", 01:10:25.515 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:25.515 "adrfam": "ipv4", 01:10:25.515 "trsvcid": "$NVMF_PORT", 01:10:25.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:25.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:25.515 "hdgst": ${hdgst:-false}, 01:10:25.515 "ddgst": ${ddgst:-false} 01:10:25.515 }, 01:10:25.515 "method": "bdev_nvme_attach_controller" 01:10:25.515 } 01:10:25.515 EOF 01:10:25.515 )") 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:10:25.515 "params": { 01:10:25.515 "name": "Nvme0", 01:10:25.515 "trtype": "tcp", 01:10:25.515 "traddr": "10.0.0.2", 01:10:25.515 "adrfam": "ipv4", 01:10:25.515 "trsvcid": "4420", 01:10:25.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:25.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:10:25.515 "hdgst": false, 01:10:25.515 "ddgst": false 01:10:25.515 }, 01:10:25.515 "method": "bdev_nvme_attach_controller" 01:10:25.515 },{ 01:10:25.515 "params": { 01:10:25.515 "name": "Nvme1", 01:10:25.515 "trtype": "tcp", 01:10:25.515 "traddr": "10.0.0.2", 01:10:25.515 "adrfam": "ipv4", 01:10:25.515 "trsvcid": "4420", 01:10:25.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:10:25.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:10:25.515 "hdgst": false, 01:10:25.515 "ddgst": false 01:10:25.515 }, 01:10:25.515 "method": "bdev_nvme_attach_controller" 01:10:25.515 }' 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:10:25.515 11:05:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:25.515 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:10:25.515 ... 01:10:25.515 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:10:25.515 ... 
01:10:25.515 fio-3.35 01:10:25.515 Starting 4 threads 01:10:29.703 01:10:29.703 filename0: (groupid=0, jobs=1): err= 0: pid=117530: Mon Jul 22 11:05:37 2024 01:10:29.703 read: IOPS=2531, BW=19.8MiB/s (20.7MB/s)(98.9MiB/5003msec) 01:10:29.703 slat (nsec): min=5873, max=60713, avg=7904.55, stdev=2991.03 01:10:29.703 clat (usec): min=1557, max=9983, avg=3135.68, stdev=352.00 01:10:29.703 lat (usec): min=1566, max=9994, avg=3143.58, stdev=352.22 01:10:29.703 clat percentiles (usec): 01:10:29.703 | 1.00th=[ 2212], 5.00th=[ 2638], 10.00th=[ 3032], 20.00th=[ 3064], 01:10:29.703 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3097], 01:10:29.703 | 70.00th=[ 3130], 80.00th=[ 3163], 90.00th=[ 3326], 95.00th=[ 3556], 01:10:29.703 | 99.00th=[ 4015], 99.50th=[ 4555], 99.90th=[ 8717], 99.95th=[ 9634], 01:10:29.703 | 99.99th=[10028] 01:10:29.703 bw ( KiB/s): min=19760, max=20608, per=25.14%, avg=20441.89, stdev=259.45, samples=9 01:10:29.703 iops : min= 2470, max= 2576, avg=2555.22, stdev=32.43, samples=9 01:10:29.703 lat (msec) : 2=0.07%, 4=98.82%, 10=1.11% 01:10:29.703 cpu : usr=93.06%, sys=6.02%, ctx=4, majf=0, minf=0 01:10:29.703 IO depths : 1=0.1%, 2=0.1%, 4=75.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:29.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:29.703 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:29.703 issued rwts: total=12664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:29.703 latency : target=0, window=0, percentile=100.00%, depth=8 01:10:29.703 filename0: (groupid=0, jobs=1): err= 0: pid=117531: Mon Jul 22 11:05:37 2024 01:10:29.703 read: IOPS=2542, BW=19.9MiB/s (20.8MB/s)(99.4MiB/5002msec) 01:10:29.703 slat (nsec): min=5824, max=37290, avg=11436.31, stdev=3784.60 01:10:29.703 clat (usec): min=1374, max=11879, avg=3090.86, stdev=408.35 01:10:29.703 lat (usec): min=1384, max=11892, avg=3102.29, stdev=408.19 01:10:29.703 clat percentiles (usec): 01:10:29.703 | 1.00th=[ 2245], 5.00th=[ 2966], 10.00th=[ 2999], 20.00th=[ 3032], 01:10:29.703 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3097], 01:10:29.703 | 70.00th=[ 3097], 80.00th=[ 3097], 90.00th=[ 3195], 95.00th=[ 3261], 01:10:29.703 | 99.00th=[ 4555], 99.50th=[ 5014], 99.90th=[ 8848], 99.95th=[11863], 01:10:29.703 | 99.99th=[11863] 01:10:29.703 bw ( KiB/s): min=20480, max=20736, per=25.28%, avg=20551.11, stdev=92.99, samples=9 01:10:29.703 iops : min= 2560, max= 2592, avg=2568.89, stdev=11.62, samples=9 01:10:29.703 lat (msec) : 2=0.57%, 4=97.48%, 10=1.89%, 20=0.07% 01:10:29.703 cpu : usr=93.00%, sys=6.06%, ctx=5, majf=0, minf=0 01:10:29.703 IO depths : 1=7.9%, 2=25.0%, 4=50.0%, 8=17.1%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:29.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:29.703 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:29.703 issued rwts: total=12720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:29.703 latency : target=0, window=0, percentile=100.00%, depth=8 01:10:29.703 filename1: (groupid=0, jobs=1): err= 0: pid=117532: Mon Jul 22 11:05:37 2024 01:10:29.703 read: IOPS=2548, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5001msec) 01:10:29.703 slat (nsec): min=5848, max=44303, avg=6993.92, stdev=2167.72 01:10:29.703 clat (usec): min=1310, max=9592, avg=3105.28, stdev=243.57 01:10:29.703 lat (usec): min=1320, max=9604, avg=3112.27, stdev=243.42 01:10:29.703 clat percentiles (usec): 01:10:29.703 | 1.00th=[ 2704], 5.00th=[ 2999], 10.00th=[ 2999], 20.00th=[ 3064], 01:10:29.703 | 30.00th=[ 
3064], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3097], 01:10:29.703 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3228], 95.00th=[ 3294], 01:10:29.703 | 99.00th=[ 3490], 99.50th=[ 3654], 99.90th=[ 6194], 99.95th=[ 9503], 01:10:29.703 | 99.99th=[ 9634] 01:10:29.703 bw ( KiB/s): min=20480, max=20736, per=25.31%, avg=20579.56, stdev=106.67, samples=9 01:10:29.703 iops : min= 2560, max= 2592, avg=2572.44, stdev=13.33, samples=9 01:10:29.703 lat (msec) : 2=0.24%, 4=99.44%, 10=0.32% 01:10:29.703 cpu : usr=92.82%, sys=6.24%, ctx=13, majf=0, minf=0 01:10:29.703 IO depths : 1=8.5%, 2=25.0%, 4=50.0%, 8=16.5%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:29.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:29.703 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:29.703 issued rwts: total=12744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:29.703 latency : target=0, window=0, percentile=100.00%, depth=8 01:10:29.703 filename1: (groupid=0, jobs=1): err= 0: pid=117533: Mon Jul 22 11:05:37 2024 01:10:29.703 read: IOPS=2541, BW=19.9MiB/s (20.8MB/s)(99.3MiB/5001msec) 01:10:29.703 slat (nsec): min=6004, max=36792, avg=11042.20, stdev=2841.80 01:10:29.703 clat (usec): min=1437, max=11898, avg=3099.54, stdev=349.85 01:10:29.703 lat (usec): min=1448, max=11911, avg=3110.58, stdev=349.65 01:10:29.703 clat percentiles (usec): 01:10:29.703 | 1.00th=[ 2376], 5.00th=[ 2999], 10.00th=[ 2999], 20.00th=[ 3032], 01:10:29.703 | 30.00th=[ 3064], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3097], 01:10:29.703 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3195], 95.00th=[ 3294], 01:10:29.703 | 99.00th=[ 3785], 99.50th=[ 4047], 99.90th=[ 8848], 99.95th=[11863], 01:10:29.703 | 99.99th=[11863] 01:10:29.703 bw ( KiB/s): min=20392, max=20736, per=25.27%, avg=20541.33, stdev=105.22, samples=9 01:10:29.703 iops : min= 2549, max= 2592, avg=2567.67, stdev=13.15, samples=9 01:10:29.703 lat (msec) : 2=0.07%, 4=99.23%, 10=0.63%, 20=0.07% 01:10:29.703 cpu : usr=92.86%, sys=6.22%, ctx=55, majf=0, minf=10 01:10:29.703 IO depths : 1=9.2%, 2=25.0%, 4=50.0%, 8=15.8%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:29.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:29.703 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:29.703 issued rwts: total=12712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:29.703 latency : target=0, window=0, percentile=100.00%, depth=8 01:10:29.703 01:10:29.703 Run status group 0 (all jobs): 01:10:29.703 READ: bw=79.4MiB/s (83.2MB/s), 19.8MiB/s-19.9MiB/s (20.7MB/s-20.9MB/s), io=397MiB (416MB), run=5001-5003msec 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:29.962 11:05:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:29.962 01:10:29.962 real 0m23.622s 01:10:29.962 user 2m6.567s 01:10:29.962 sys 0m5.186s 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 01:10:29.962 ************************************ 01:10:29.962 END TEST fio_dif_rand_params 01:10:29.962 11:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:10:29.962 ************************************ 01:10:29.962 11:05:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:10:29.962 11:05:37 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 01:10:29.962 11:05:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:10:29.962 11:05:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:10:29.962 11:05:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:10:29.962 ************************************ 01:10:29.962 START TEST fio_dif_digest 01:10:29.962 ************************************ 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@130 
-- # create_subsystems 0 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 01:10:29.962 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:10:29.963 bdev_null0 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:10:29.963 [2024-07-22 11:05:37.884949] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:10:29.963 { 01:10:29.963 "params": { 01:10:29.963 "name": "Nvme$subsystem", 01:10:29.963 "trtype": "$TEST_TRANSPORT", 01:10:29.963 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:29.963 "adrfam": "ipv4", 01:10:29.963 "trsvcid": "$NVMF_PORT", 01:10:29.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:29.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:29.963 "hdgst": ${hdgst:-false}, 01:10:29.963 "ddgst": ${ddgst:-false} 01:10:29.963 }, 01:10:29.963 "method": "bdev_nvme_attach_controller" 01:10:29.963 } 01:10:29.963 EOF 01:10:29.963 )") 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:10:29.963 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
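The job file that gen_fio_conf writes to /dev/fd/61 is never echoed into the log; only the JSON side of the pipe is. A minimal sketch of an equivalent job file, reconstructed from the parameters set at the top of this test (bs=128k, numjobs=3, iodepth=3, runtime=10) and from fio's banner just below (rw=randread, ioengine=spdk_bdev); the output path, the filename value and the direct/time_based flags are assumptions, not taken from the trace:

cat > /tmp/dif_digest.fio <<'EOF'   # path is an assumption
[global]
ioengine=spdk_bdev    # resolved by the LD_PRELOAD'ed spdk_bdev fio plugin
thread=1              # SPDK fio plugins run as threads ("Starting 3 threads" below)
direct=1
time_based=1
runtime=10
bs=128k
iodepth=3
rw=randread

[filename0]
filename=Nvme0n1      # assumed bdev name exposed by the "Nvme0" attach-controller entry
numjobs=3
EOF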
01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:10:30.222 "params": { 01:10:30.222 "name": "Nvme0", 01:10:30.222 "trtype": "tcp", 01:10:30.222 "traddr": "10.0.0.2", 01:10:30.222 "adrfam": "ipv4", 01:10:30.222 "trsvcid": "4420", 01:10:30.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:30.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:10:30.222 "hdgst": true, 01:10:30.222 "ddgst": true 01:10:30.222 }, 01:10:30.222 "method": "bdev_nvme_attach_controller" 01:10:30.222 }' 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:10:30.222 11:05:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:30.222 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:10:30.222 ... 
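The jq output above is the JSON handed to fio on /dev/fd/62, and the test then launches /usr/src/fio/fio with the spdk_bdev engine preloaded and both descriptors supplied via process substitution. A sketch of the same invocation done by hand with ordinary files; the two file names are assumptions, while the binary paths and option spelling are the ones in the traced command line:

# bdev.json  : SPDK JSON config containing the bdev_nvme_attach_controller entry printed above
# digest.fio : a job file along the lines of the sketch shown earlier
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /path/to/bdev.json \
    /path/to/digest.fio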
01:10:30.222 fio-3.35 01:10:30.222 Starting 3 threads 01:10:42.422 01:10:42.422 filename0: (groupid=0, jobs=1): err= 0: pid=117639: Mon Jul 22 11:05:48 2024 01:10:42.422 read: IOPS=301, BW=37.7MiB/s (39.5MB/s)(378MiB/10047msec) 01:10:42.422 slat (nsec): min=6036, max=29105, avg=10441.58, stdev=2323.55 01:10:42.422 clat (usec): min=5632, max=50966, avg=9931.12, stdev=2156.57 01:10:42.422 lat (usec): min=5639, max=50977, avg=9941.56, stdev=2156.61 01:10:42.422 clat percentiles (usec): 01:10:42.422 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9372], 01:10:42.422 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 01:10:42.422 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 01:10:42.422 | 99.00th=[11600], 99.50th=[12125], 99.90th=[50070], 99.95th=[50594], 01:10:42.422 | 99.99th=[51119] 01:10:42.422 bw ( KiB/s): min=36608, max=39936, per=38.13%, avg=38707.20, stdev=1335.65, samples=20 01:10:42.422 iops : min= 286, max= 312, avg=302.40, stdev=10.43, samples=20 01:10:42.422 lat (msec) : 10=67.56%, 20=32.11%, 50=0.20%, 100=0.13% 01:10:42.422 cpu : usr=91.31%, sys=7.53%, ctx=8, majf=0, minf=0 01:10:42.422 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:42.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:42.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:42.422 issued rwts: total=3027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:42.422 latency : target=0, window=0, percentile=100.00%, depth=3 01:10:42.422 filename0: (groupid=0, jobs=1): err= 0: pid=117640: Mon Jul 22 11:05:48 2024 01:10:42.422 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(353MiB/10045msec) 01:10:42.422 slat (usec): min=6, max=132, avg=10.46, stdev= 4.59 01:10:42.422 clat (usec): min=5957, max=49054, avg=10653.67, stdev=1368.94 01:10:42.422 lat (usec): min=5963, max=49066, avg=10664.13, stdev=1368.97 01:10:42.422 clat percentiles (usec): 01:10:42.422 | 1.00th=[ 6849], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 01:10:42.422 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 01:10:42.422 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 01:10:42.422 | 99.00th=[12649], 99.50th=[13042], 99.90th=[22938], 99.95th=[45876], 01:10:42.422 | 99.99th=[49021] 01:10:42.422 bw ( KiB/s): min=33792, max=37632, per=35.55%, avg=36083.20, stdev=1187.68, samples=20 01:10:42.422 iops : min= 264, max= 294, avg=281.90, stdev= 9.28, samples=20 01:10:42.422 lat (msec) : 10=19.96%, 20=79.94%, 50=0.11% 01:10:42.422 cpu : usr=90.89%, sys=7.73%, ctx=48, majf=0, minf=0 01:10:42.422 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:42.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:42.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:42.422 issued rwts: total=2821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:42.422 latency : target=0, window=0, percentile=100.00%, depth=3 01:10:42.422 filename0: (groupid=0, jobs=1): err= 0: pid=117641: Mon Jul 22 11:05:48 2024 01:10:42.422 read: IOPS=211, BW=26.5MiB/s (27.8MB/s)(265MiB/10003msec) 01:10:42.422 slat (nsec): min=6085, max=30848, avg=10852.51, stdev=2217.08 01:10:42.422 clat (usec): min=4494, max=63133, avg=14141.20, stdev=1993.84 01:10:42.422 lat (usec): min=4505, max=63144, avg=14152.05, stdev=1993.92 01:10:42.422 clat percentiles (usec): 01:10:42.422 | 1.00th=[ 9110], 5.00th=[13042], 10.00th=[13304], 20.00th=[13566], 01:10:42.422 | 
30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 01:10:42.422 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 01:10:42.422 | 99.00th=[16057], 99.50th=[16319], 99.90th=[56886], 99.95th=[62129], 01:10:42.422 | 99.99th=[63177] 01:10:42.422 bw ( KiB/s): min=24576, max=29184, per=26.69%, avg=27097.60, stdev=1169.53, samples=20 01:10:42.422 iops : min= 192, max= 228, avg=211.70, stdev= 9.14, samples=20 01:10:42.422 lat (msec) : 10=1.27%, 20=98.54%, 50=0.05%, 100=0.14% 01:10:42.422 cpu : usr=92.33%, sys=6.68%, ctx=97, majf=0, minf=0 01:10:42.422 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:42.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:42.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:42.422 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:42.422 latency : target=0, window=0, percentile=100.00%, depth=3 01:10:42.422 01:10:42.422 Run status group 0 (all jobs): 01:10:42.422 READ: bw=99.1MiB/s (104MB/s), 26.5MiB/s-37.7MiB/s (27.8MB/s-39.5MB/s), io=996MiB (1044MB), run=10003-10047msec 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:42.422 01:10:42.422 real 0m11.022s 01:10:42.422 user 0m28.191s 01:10:42.422 sys 0m2.518s 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 01:10:42.422 11:05:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:10:42.422 ************************************ 01:10:42.422 END TEST fio_dif_digest 01:10:42.422 ************************************ 01:10:42.422 11:05:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:10:42.422 11:05:48 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:10:42.422 11:05:48 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:10:42.422 11:05:48 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 01:10:42.422 11:05:48 nvmf_dif -- nvmf/common.sh@117 -- # sync 01:10:42.422 11:05:48 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:10:42.422 11:05:48 nvmf_dif -- nvmf/common.sh@120 -- # set +e 01:10:42.422 11:05:48 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 01:10:42.422 11:05:48 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:10:42.422 rmmod nvme_tcp 01:10:42.422 rmmod nvme_fabrics 
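Everything this digest test touches on the target side is created and destroyed through rpc_cmd, a thin wrapper around scripts/rpc.py. A condensed sketch of the same sequence issued directly against the running nvmf_tgt, using the RPC names and arguments exactly as traced above (the TCP transport is created earlier in the suite and is assumed to exist; the rpc.py path is the one used by this run):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# setup: 64 MiB null bdev, 512-byte blocks + 16-byte metadata, DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# teardown, mirroring destroy_subsystems above
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0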
01:10:42.422 rmmod nvme_keyring 01:10:42.422 11:05:49 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:10:42.422 11:05:49 nvmf_dif -- nvmf/common.sh@124 -- # set -e 01:10:42.422 11:05:49 nvmf_dif -- nvmf/common.sh@125 -- # return 0 01:10:42.422 11:05:49 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 116871 ']' 01:10:42.422 11:05:49 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 116871 01:10:42.422 11:05:49 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 116871 ']' 01:10:42.422 11:05:49 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 116871 01:10:42.422 11:05:49 nvmf_dif -- common/autotest_common.sh@953 -- # uname 01:10:42.422 11:05:49 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:42.422 11:05:49 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116871 01:10:42.422 11:05:49 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:10:42.422 killing process with pid 116871 01:10:42.422 11:05:49 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:10:42.422 11:05:49 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116871' 01:10:42.422 11:05:49 nvmf_dif -- common/autotest_common.sh@967 -- # kill 116871 01:10:42.422 11:05:49 nvmf_dif -- common/autotest_common.sh@972 -- # wait 116871 01:10:42.422 11:05:49 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 01:10:42.422 11:05:49 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:10:42.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:10:42.423 Waiting for block devices as requested 01:10:42.423 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:10:42.423 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:10:42.423 11:05:50 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:10:42.423 11:05:50 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:10:42.423 11:05:50 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:10:42.423 11:05:50 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 01:10:42.423 11:05:50 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:42.423 11:05:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:10:42.423 11:05:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:42.423 11:05:50 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:10:42.423 01:10:42.423 real 1m0.238s 01:10:42.423 user 3m50.174s 01:10:42.423 sys 0m17.693s 01:10:42.423 11:05:50 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 01:10:42.423 ************************************ 01:10:42.423 END TEST nvmf_dif 01:10:42.423 ************************************ 01:10:42.423 11:05:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:10:42.423 11:05:50 -- common/autotest_common.sh@1142 -- # return 0 01:10:42.423 11:05:50 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:10:42.423 11:05:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:10:42.423 11:05:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:10:42.423 11:05:50 -- common/autotest_common.sh@10 -- # set +x 01:10:42.423 ************************************ 01:10:42.423 START TEST nvmf_abort_qd_sizes 01:10:42.423 ************************************ 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:10:42.423 * Looking for test storage... 01:10:42.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:10:42.423 11:05:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:10:42.423 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:10:42.683 Cannot find device "nvmf_tgt_br" 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:10:42.683 Cannot find device "nvmf_tgt_br2" 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:10:42.683 Cannot find device "nvmf_tgt_br" 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:10:42.683 Cannot find device "nvmf_tgt_br2" 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:42.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:42.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 01:10:42.683 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:10:42.684 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:42.684 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:42.684 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:42.684 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:42.684 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:42.684 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:42.684 11:05:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:10:42.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:42.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 01:10:42.943 01:10:42.943 --- 10.0.0.2 ping statistics --- 01:10:42.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:42.943 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:10:42.943 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:42.943 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 01:10:42.943 01:10:42.943 --- 10.0.0.3 ping statistics --- 01:10:42.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:42.943 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:42.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:10:42.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 01:10:42.943 01:10:42.943 --- 10.0.0.1 ping statistics --- 01:10:42.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:42.943 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 01:10:42.943 11:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:10:43.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:10:43.880 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:10:43.880 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:10:43.880 11:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:43.880 11:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:10:43.880 11:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:10:43.880 11:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:43.880 11:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:10:43.880 11:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=118239 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 118239 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 118239 ']' 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:44.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 01:10:44.156 11:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:10:44.156 [2024-07-22 11:05:51.880337] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:10:44.156 [2024-07-22 11:05:51.880410] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:44.156 [2024-07-22 11:05:51.999212] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 01:10:44.156 [2024-07-22 11:05:52.019339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:10:44.156 [2024-07-22 11:05:52.062207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:44.156 [2024-07-22 11:05:52.062263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:44.156 [2024-07-22 11:05:52.062281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:44.156 [2024-07-22 11:05:52.062289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:44.156 [2024-07-22 11:05:52.062295] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:10:44.156 [2024-07-22 11:05:52.062494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:10:44.156 [2024-07-22 11:05:52.063485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:10:44.156 [2024-07-22 11:05:52.063565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:10:44.156 [2024-07-22 11:05:52.063566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 
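Before nvmf_tgt was started for this test, nvmf_veth_init (the nvmf/common.sh@141 onward trace above) assembled the virtual test network: a veth pair per endpoint, a bridge joining the host-side peers, the target ends moved into the nvmf_tgt_ns_spdk namespace, and an iptables accept rule for port 4420. A condensed standalone sketch using the same device and namespace names as the trace (the second target interface, nvmf_tgt_if2/nvmf_tgt_br2 for 10.0.0.3, is set up the same way and omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                         # the reachability check run above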
01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 01:10:45.095 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # 
xtrace_disable 01:10:45.096 11:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:10:45.096 ************************************ 01:10:45.096 START TEST spdk_target_abort 01:10:45.096 ************************************ 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:10:45.096 spdk_targetn1 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:10:45.096 [2024-07-22 11:05:52.921424] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:10:45.096 [2024-07-22 11:05:52.957536] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:10:45.096 11:05:52 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:10:45.096 11:05:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:10:48.378 Initializing NVMe Controllers 01:10:48.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:10:48.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:10:48.378 Initialization complete. Launching workers. 
01:10:48.378 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15132, failed: 0 01:10:48.378 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1144, failed to submit 13988 01:10:48.378 success 735, unsuccess 409, failed 0 01:10:48.378 11:05:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:10:48.378 11:05:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:10:51.657 Initializing NVMe Controllers 01:10:51.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:10:51.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:10:51.657 Initialization complete. Launching workers. 01:10:51.657 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5982, failed: 0 01:10:51.657 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1252, failed to submit 4730 01:10:51.657 success 244, unsuccess 1008, failed 0 01:10:51.657 11:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:10:51.657 11:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:10:54.937 Initializing NVMe Controllers 01:10:54.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:10:54.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:10:54.937 Initialization complete. Launching workers. 
01:10:54.937 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33783, failed: 0 01:10:54.937 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2748, failed to submit 31035 01:10:54.937 success 499, unsuccess 2249, failed 0 01:10:54.937 11:06:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:10:54.937 11:06:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:54.937 11:06:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:10:54.937 11:06:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:54.937 11:06:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:10:54.937 11:06:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:54.937 11:06:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 118239 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 118239 ']' 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 118239 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118239 01:10:55.501 killing process with pid 118239 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118239' 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 118239 01:10:55.501 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 118239 01:10:55.759 ************************************ 01:10:55.759 END TEST spdk_target_abort 01:10:55.759 ************************************ 01:10:55.759 01:10:55.759 real 0m10.655s 01:10:55.759 user 0m42.982s 01:10:55.759 sys 0m2.227s 01:10:55.759 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 01:10:55.759 11:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:10:55.759 11:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 01:10:55.759 11:06:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:10:55.759 11:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:10:55.759 11:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 01:10:55.759 11:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:10:55.759 
************************************ 01:10:55.759 START TEST kernel_target_abort 01:10:55.759 ************************************ 01:10:55.759 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 01:10:55.759 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:10:55.759 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 01:10:55.759 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 01:10:55.759 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 01:10:55.759 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:10:55.759 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 01:10:55.760 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:10:56.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:10:56.326 Waiting for block devices as requested 01:10:56.326 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:10:56.585 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:10:56.585 No valid GPT data, bailing 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 01:10:56.585 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:10:56.844 No valid GPT data, bailing 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:10:56.844 No valid GPT data, bailing 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:10:56.844 No valid GPT data, bailing 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:10:56.844 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 --hostid=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 -a 10.0.0.1 -t tcp -s 4420 01:10:57.103 01:10:57.103 Discovery Log Number of Records 2, Generation counter 2 01:10:57.103 =====Discovery Log Entry 0====== 01:10:57.103 trtype: tcp 01:10:57.103 adrfam: ipv4 01:10:57.103 subtype: current discovery subsystem 01:10:57.103 treq: not specified, sq flow control disable supported 01:10:57.103 portid: 1 01:10:57.103 trsvcid: 4420 01:10:57.103 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:10:57.103 traddr: 10.0.0.1 01:10:57.103 eflags: none 01:10:57.103 sectype: none 01:10:57.103 =====Discovery Log Entry 1====== 01:10:57.103 trtype: tcp 01:10:57.103 adrfam: ipv4 01:10:57.103 subtype: nvme subsystem 01:10:57.103 treq: not specified, sq flow control disable supported 01:10:57.103 portid: 1 01:10:57.103 trsvcid: 4420 01:10:57.103 subnqn: nqn.2016-06.io.spdk:testnqn 01:10:57.103 traddr: 10.0.0.1 01:10:57.103 eflags: none 01:10:57.103 sectype: none 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:10:57.103 11:06:04 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:10:57.103 11:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:11:00.395 Initializing NVMe Controllers 01:11:00.395 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:11:00.395 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:11:00.395 Initialization complete. Launching workers. 01:11:00.395 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36819, failed: 0 01:11:00.395 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36819, failed to submit 0 01:11:00.395 success 0, unsuccess 36819, failed 0 01:11:00.395 11:06:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:11:00.395 11:06:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:11:03.675 Initializing NVMe Controllers 01:11:03.675 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:11:03.675 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:11:03.675 Initialization complete. Launching workers. 
01:11:03.675 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76119, failed: 0 01:11:03.675 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38394, failed to submit 37725 01:11:03.675 success 0, unsuccess 38394, failed 0 01:11:03.675 11:06:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:11:03.675 11:06:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:11:07.006 Initializing NVMe Controllers 01:11:07.006 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:11:07.006 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:11:07.006 Initialization complete. Launching workers. 01:11:07.006 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101601, failed: 0 01:11:07.006 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25422, failed to submit 76179 01:11:07.006 success 0, unsuccess 25422, failed 0 01:11:07.006 11:06:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:11:07.006 11:06:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:11:07.006 11:06:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 01:11:07.006 11:06:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:11:07.006 11:06:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:11:07.006 11:06:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:11:07.006 11:06:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:11:07.006 11:06:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 01:11:07.006 11:06:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 01:11:07.006 11:06:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:11:07.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:11:09.164 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:11:09.164 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:11:09.424 01:11:09.424 real 0m13.550s 01:11:09.424 user 0m6.218s 01:11:09.424 sys 0m4.694s 01:11:09.424 11:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 01:11:09.424 11:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:11:09.424 ************************************ 01:11:09.424 END TEST kernel_target_abort 01:11:09.424 ************************************ 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:11:09.424 
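[Editor's note] The kernel_target_abort run above stands up a Linux kernel NVMe/TCP target through configfs (configure_kernel_target in test/nvmf/common.sh) and tears it down again with clean_kernel_target. The sketch below reconstructs that flow from the trace. The xtrace does not show redirection targets, so the configfs attribute names are the standard Linux nvmet ones and should be read as an assumption, not as copied from this log.

# Kernel nvmet setup, as exercised above (run as root).
subnqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$subnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$ns" "$port"
echo "SPDK-$subnqn" > "$subsys/attr_model"            # attribute names assumed (hidden by xtrace)
echo 1              > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1   > "$ns/device_path"               # block device selected by the GPT scan above
echo 1              > "$ns/enable"
echo 10.0.0.1       > "$port/addr_traddr"
echo tcp            > "$port/addr_trtype"
echo 4420           > "$port/addr_trsvcid"
echo ipv4           > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                   # expose the subsystem on the port

# Teardown mirrors clean_kernel_target: disable the namespace, unlink and
# remove the configfs nodes, then unload the kernel target modules.
echo 0 > "$ns/enable"
rm -f "$port/subsystems/$subnqn"
rmdir "$ns" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet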
11:06:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:11:09.424 rmmod nvme_tcp 01:11:09.424 rmmod nvme_fabrics 01:11:09.424 rmmod nvme_keyring 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 118239 ']' 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 118239 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 118239 ']' 01:11:09.424 11:06:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 118239 01:11:09.424 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (118239) - No such process 01:11:09.425 Process with pid 118239 is not found 01:11:09.425 11:06:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 118239 is not found' 01:11:09.425 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 01:11:09.425 11:06:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:11:09.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:11:09.990 Waiting for block devices as requested 01:11:09.990 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:11:10.249 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:11:10.249 11:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:11:10.249 11:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:11:10.249 11:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:11:10.249 11:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 01:11:10.249 11:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:10.249 11:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:11:10.249 11:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:10.249 11:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:11:10.249 01:11:10.249 real 0m28.018s 01:11:10.249 user 0m50.417s 01:11:10.249 sys 0m8.810s 01:11:10.249 11:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 01:11:10.249 11:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:11:10.249 ************************************ 01:11:10.249 END TEST nvmf_abort_qd_sizes 01:11:10.249 ************************************ 01:11:10.507 11:06:18 -- common/autotest_common.sh@1142 -- # return 0 01:11:10.507 11:06:18 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:11:10.507 11:06:18 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 01:11:10.507 11:06:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:11:10.507 11:06:18 -- common/autotest_common.sh@10 -- # set +x 01:11:10.507 ************************************ 01:11:10.507 START TEST keyring_file 01:11:10.507 ************************************ 01:11:10.507 11:06:18 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:11:10.507 * Looking for test storage... 01:11:10.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:11:10.507 11:06:18 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:11:10.507 11:06:18 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:10.507 11:06:18 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:10.507 11:06:18 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:10.507 11:06:18 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:10.507 11:06:18 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:10.507 11:06:18 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:10.507 11:06:18 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:10.507 11:06:18 keyring_file -- paths/export.sh@5 -- # export PATH 01:11:10.507 11:06:18 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@47 -- # : 0 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 01:11:10.507 11:06:18 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:11:10.507 11:06:18 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:11:10.507 11:06:18 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:11:10.507 11:06:18 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:11:10.507 11:06:18 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:11:10.507 11:06:18 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:11:10.507 11:06:18 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:11:10.507 11:06:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:11:10.507 11:06:18 keyring_file -- keyring/common.sh@17 -- # name=key0 01:11:10.507 11:06:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:11:10.507 11:06:18 keyring_file -- keyring/common.sh@17 -- # digest=0 01:11:10.507 11:06:18 keyring_file -- keyring/common.sh@18 -- # mktemp 01:11:10.507 11:06:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QApTvWU7so 01:11:10.507 11:06:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:11:10.507 11:06:18 keyring_file -- nvmf/common.sh@705 -- # python - 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QApTvWU7so 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QApTvWU7so 01:11:10.764 11:06:18 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.QApTvWU7so 01:11:10.764 11:06:18 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@17 -- # name=key1 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@17 -- # digest=0 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@18 -- # mktemp 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YVfqOpk8cs 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:11:10.764 11:06:18 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:11:10.764 11:06:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:11:10.764 11:06:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:11:10.764 11:06:18 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 01:11:10.764 11:06:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:11:10.764 11:06:18 keyring_file -- nvmf/common.sh@705 -- # python - 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YVfqOpk8cs 01:11:10.764 11:06:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YVfqOpk8cs 01:11:10.764 11:06:18 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.YVfqOpk8cs 01:11:10.764 11:06:18 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:11:10.764 11:06:18 keyring_file -- keyring/file.sh@30 -- # tgtpid=119126 01:11:10.764 11:06:18 keyring_file -- keyring/file.sh@32 -- # waitforlisten 119126 01:11:10.764 11:06:18 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 119126 ']' 01:11:10.764 11:06:18 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:10.764 11:06:18 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:10.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:10.764 11:06:18 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:10.764 11:06:18 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:10.764 11:06:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:11:10.764 [2024-07-22 11:06:18.567293] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
01:11:10.764 [2024-07-22 11:06:18.567371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119126 ] 01:11:10.764 [2024-07-22 11:06:18.687935] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:11:11.020 [2024-07-22 11:06:18.697733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:11.020 [2024-07-22 11:06:18.740932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:11:11.584 11:06:19 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:11.584 11:06:19 keyring_file -- common/autotest_common.sh@862 -- # return 0 01:11:11.584 11:06:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:11:11.584 11:06:19 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:11.584 11:06:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:11:11.584 [2024-07-22 11:06:19.483022] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:11.584 null0 01:11:11.584 [2024-07-22 11:06:19.514949] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:11:11.584 [2024-07-22 11:06:19.515155] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:11:11.842 [2024-07-22 11:06:19.522931] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:11.842 11:06:19 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:11:11.842 [2024-07-22 11:06:19.538893] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:11:11.842 2024/07/22 11:06:19 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 01:11:11.842 request: 01:11:11.842 { 01:11:11.842 "method": "nvmf_subsystem_add_listener", 01:11:11.842 "params": { 01:11:11.842 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:11:11.842 "secure_channel": false, 01:11:11.842 
"listen_address": { 01:11:11.842 "trtype": "tcp", 01:11:11.842 "traddr": "127.0.0.1", 01:11:11.842 "trsvcid": "4420" 01:11:11.842 } 01:11:11.842 } 01:11:11.842 } 01:11:11.842 Got JSON-RPC error response 01:11:11.842 GoRPCClient: error on JSON-RPC call 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:11.842 11:06:19 keyring_file -- keyring/file.sh@46 -- # bperfpid=119160 01:11:11.842 11:06:19 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:11:11.842 11:06:19 keyring_file -- keyring/file.sh@48 -- # waitforlisten 119160 /var/tmp/bperf.sock 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 119160 ']' 01:11:11.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:11.842 11:06:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:11:11.842 [2024-07-22 11:06:19.606259] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:11:11.843 [2024-07-22 11:06:19.606390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119160 ] 01:11:11.843 [2024-07-22 11:06:19.725514] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
01:11:11.843 [2024-07-22 11:06:19.750720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:12.100 [2024-07-22 11:06:19.795644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:12.662 11:06:20 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:12.662 11:06:20 keyring_file -- common/autotest_common.sh@862 -- # return 0 01:11:12.662 11:06:20 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QApTvWU7so 01:11:12.662 11:06:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QApTvWU7so 01:11:12.920 11:06:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YVfqOpk8cs 01:11:12.920 11:06:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YVfqOpk8cs 01:11:13.178 11:06:20 keyring_file -- keyring/file.sh@51 -- # get_key key0 01:11:13.178 11:06:20 keyring_file -- keyring/file.sh@51 -- # jq -r .path 01:11:13.178 11:06:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:13.178 11:06:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:13.178 11:06:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:11:13.436 11:06:21 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.QApTvWU7so == \/\t\m\p\/\t\m\p\.\Q\A\p\T\v\W\U\7\s\o ]] 01:11:13.436 11:06:21 keyring_file -- keyring/file.sh@52 -- # get_key key1 01:11:13.436 11:06:21 keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:11:13.436 11:06:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:13.436 11:06:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:11:13.436 11:06:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:13.694 11:06:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.YVfqOpk8cs == \/\t\m\p\/\t\m\p\.\Y\V\f\q\O\p\k\8\c\s ]] 01:11:13.694 11:06:21 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 01:11:13.694 11:06:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:11:13.694 11:06:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:13.694 11:06:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:13.694 11:06:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:11:13.694 11:06:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:13.952 11:06:21 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 01:11:13.952 11:06:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 01:11:13.952 11:06:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:13.952 11:06:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:11:13.952 11:06:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:13.952 11:06:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:13.952 11:06:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:11:14.211 11:06:21 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:11:14.211 11:06:21 keyring_file -- keyring/file.sh@57 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:11:14.211 11:06:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:11:14.211 [2024-07-22 11:06:22.074557] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:14.470 nvme0n1 01:11:14.470 11:06:22 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 01:11:14.470 11:06:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:14.470 11:06:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:11:14.470 11:06:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:14.470 11:06:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:14.470 11:06:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:11:14.470 11:06:22 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 01:11:14.470 11:06:22 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 01:11:14.470 11:06:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:11:14.470 11:06:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:14.470 11:06:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:14.470 11:06:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:11:14.470 11:06:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:14.729 11:06:22 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 01:11:14.729 11:06:22 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:11:14.987 Running I/O for 1 seconds... 
01:11:15.919 01:11:15.919 Latency(us) 01:11:15.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:15.919 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 01:11:15.919 nvme0n1 : 1.00 14902.21 58.21 0.00 0.00 8567.11 4553.30 17055.15 01:11:15.919 =================================================================================================================== 01:11:15.919 Total : 14902.21 58.21 0.00 0.00 8567.11 4553.30 17055.15 01:11:15.919 0 01:11:15.919 11:06:23 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:11:15.919 11:06:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:11:16.177 11:06:23 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 01:11:16.177 11:06:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:11:16.177 11:06:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:16.177 11:06:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:11:16.177 11:06:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:16.177 11:06:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:16.435 11:06:24 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 01:11:16.435 11:06:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 01:11:16.435 11:06:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:16.435 11:06:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:11:16.435 11:06:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:16.435 11:06:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:16.435 11:06:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:11:16.435 11:06:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 01:11:16.693 11:06:24 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:11:16.694 11:06:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:11:16.694 [2024-07-22 11:06:24.560406] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:11:16.694 [2024-07-22 11:06:24.561139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf551f0 (107): Transport endpoint is not connected 01:11:16.694 [2024-07-22 11:06:24.562126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf551f0 (9): Bad file descriptor 01:11:16.694 [2024-07-22 11:06:24.563122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:11:16.694 [2024-07-22 11:06:24.563144] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:11:16.694 [2024-07-22 11:06:24.563154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:11:16.694 2024/07/22 11:06:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:11:16.694 request: 01:11:16.694 { 01:11:16.694 "method": "bdev_nvme_attach_controller", 01:11:16.694 "params": { 01:11:16.694 "name": "nvme0", 01:11:16.694 "trtype": "tcp", 01:11:16.694 "traddr": "127.0.0.1", 01:11:16.694 "adrfam": "ipv4", 01:11:16.694 "trsvcid": "4420", 01:11:16.694 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:11:16.694 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:11:16.694 "prchk_reftag": false, 01:11:16.694 "prchk_guard": false, 01:11:16.694 "hdgst": false, 01:11:16.694 "ddgst": false, 01:11:16.694 "psk": "key1" 01:11:16.694 } 01:11:16.694 } 01:11:16.694 Got JSON-RPC error response 01:11:16.694 GoRPCClient: error on JSON-RPC call 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:16.694 11:06:24 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:16.694 11:06:24 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 01:11:16.694 11:06:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:11:16.694 11:06:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:16.694 11:06:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:16.694 11:06:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:16.694 11:06:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:11:16.957 11:06:24 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 01:11:16.957 11:06:24 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 01:11:16.957 11:06:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:11:16.957 11:06:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:16.957 11:06:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:16.957 11:06:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:16.957 11:06:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key1")' 01:11:17.221 11:06:24 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:11:17.221 11:06:25 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 01:11:17.221 11:06:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:11:17.480 11:06:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 01:11:17.480 11:06:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:11:17.480 11:06:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 01:11:17.480 11:06:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:17.480 11:06:25 keyring_file -- keyring/file.sh@77 -- # jq length 01:11:17.754 11:06:25 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 01:11:17.754 11:06:25 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.QApTvWU7so 01:11:17.754 11:06:25 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.QApTvWU7so 01:11:17.754 11:06:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:11:17.754 11:06:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.QApTvWU7so 01:11:17.754 11:06:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:11:17.754 11:06:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:17.754 11:06:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:11:17.754 11:06:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:17.755 11:06:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QApTvWU7so 01:11:17.755 11:06:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QApTvWU7so 01:11:18.025 [2024-07-22 11:06:25.770669] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QApTvWU7so': 0100660 01:11:18.025 [2024-07-22 11:06:25.770726] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:11:18.025 2024/07/22 11:06:25 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.QApTvWU7so], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:11:18.025 request: 01:11:18.025 { 01:11:18.025 "method": "keyring_file_add_key", 01:11:18.025 "params": { 01:11:18.025 "name": "key0", 01:11:18.025 "path": "/tmp/tmp.QApTvWU7so" 01:11:18.025 } 01:11:18.025 } 01:11:18.025 Got JSON-RPC error response 01:11:18.025 GoRPCClient: error on JSON-RPC call 01:11:18.025 11:06:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:11:18.025 11:06:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:18.025 11:06:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:18.025 11:06:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:18.025 11:06:25 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.QApTvWU7so 01:11:18.025 11:06:25 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QApTvWU7so 01:11:18.025 11:06:25 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QApTvWU7so 01:11:18.284 11:06:26 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.QApTvWU7so 01:11:18.284 11:06:26 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 01:11:18.284 11:06:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:18.284 11:06:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:11:18.284 11:06:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:18.284 11:06:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:18.284 11:06:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:11:18.544 11:06:26 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 01:11:18.544 11:06:26 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:11:18.544 11:06:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:11:18.544 [2024-07-22 11:06:26.413708] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.QApTvWU7so': No such file or directory 01:11:18.544 [2024-07-22 11:06:26.413750] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:11:18.544 [2024-07-22 11:06:26.413773] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:11:18.544 [2024-07-22 11:06:26.413781] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:11:18.544 [2024-07-22 11:06:26.413789] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:11:18.544 2024/07/22 11:06:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 01:11:18.544 request: 01:11:18.544 { 01:11:18.544 "method": "bdev_nvme_attach_controller", 01:11:18.544 "params": { 
01:11:18.544 "name": "nvme0", 01:11:18.544 "trtype": "tcp", 01:11:18.544 "traddr": "127.0.0.1", 01:11:18.544 "adrfam": "ipv4", 01:11:18.544 "trsvcid": "4420", 01:11:18.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:11:18.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:11:18.544 "prchk_reftag": false, 01:11:18.544 "prchk_guard": false, 01:11:18.544 "hdgst": false, 01:11:18.544 "ddgst": false, 01:11:18.544 "psk": "key0" 01:11:18.544 } 01:11:18.544 } 01:11:18.544 Got JSON-RPC error response 01:11:18.544 GoRPCClient: error on JSON-RPC call 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:18.544 11:06:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:18.544 11:06:26 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 01:11:18.544 11:06:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:11:18.802 11:06:26 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:11:18.802 11:06:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:11:18.802 11:06:26 keyring_file -- keyring/common.sh@17 -- # name=key0 01:11:18.802 11:06:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:11:18.802 11:06:26 keyring_file -- keyring/common.sh@17 -- # digest=0 01:11:18.802 11:06:26 keyring_file -- keyring/common.sh@18 -- # mktemp 01:11:18.802 11:06:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jsO8MVzCjw 01:11:18.802 11:06:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:11:18.802 11:06:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:11:18.802 11:06:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:11:18.802 11:06:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:11:18.802 11:06:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:11:18.802 11:06:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:11:18.802 11:06:26 keyring_file -- nvmf/common.sh@705 -- # python - 01:11:18.802 11:06:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jsO8MVzCjw 01:11:18.802 11:06:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jsO8MVzCjw 01:11:18.802 11:06:26 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.jsO8MVzCjw 01:11:18.802 11:06:26 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jsO8MVzCjw 01:11:18.802 11:06:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jsO8MVzCjw 01:11:19.061 11:06:26 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:11:19.061 11:06:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:11:19.319 nvme0n1 01:11:19.319 11:06:27 keyring_file -- keyring/file.sh@99 -- # get_refcnt 
key0 01:11:19.319 11:06:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:11:19.319 11:06:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:19.319 11:06:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:19.319 11:06:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:11:19.319 11:06:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:19.578 11:06:27 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 01:11:19.578 11:06:27 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 01:11:19.578 11:06:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:11:19.837 11:06:27 keyring_file -- keyring/file.sh@101 -- # get_key key0 01:11:19.837 11:06:27 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 01:11:19.837 11:06:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:19.837 11:06:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:19.837 11:06:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:11:20.095 11:06:27 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 01:11:20.095 11:06:27 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 01:11:20.095 11:06:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:11:20.095 11:06:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:20.095 11:06:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:11:20.095 11:06:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:20.095 11:06:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:20.354 11:06:28 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 01:11:20.354 11:06:28 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:11:20.354 11:06:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:11:20.613 11:06:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 01:11:20.613 11:06:28 keyring_file -- keyring/file.sh@104 -- # jq length 01:11:20.613 11:06:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:20.872 11:06:28 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 01:11:20.872 11:06:28 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jsO8MVzCjw 01:11:20.872 11:06:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jsO8MVzCjw 01:11:21.131 11:06:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YVfqOpk8cs 01:11:21.131 11:06:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YVfqOpk8cs 01:11:21.389 11:06:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:11:21.389 11:06:29 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:11:21.389 nvme0n1 01:11:21.647 11:06:29 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 01:11:21.647 11:06:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 01:11:21.906 11:06:29 keyring_file -- keyring/file.sh@112 -- # config='{ 01:11:21.906 "subsystems": [ 01:11:21.906 { 01:11:21.906 "subsystem": "keyring", 01:11:21.906 "config": [ 01:11:21.906 { 01:11:21.906 "method": "keyring_file_add_key", 01:11:21.906 "params": { 01:11:21.906 "name": "key0", 01:11:21.906 "path": "/tmp/tmp.jsO8MVzCjw" 01:11:21.906 } 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "method": "keyring_file_add_key", 01:11:21.906 "params": { 01:11:21.906 "name": "key1", 01:11:21.906 "path": "/tmp/tmp.YVfqOpk8cs" 01:11:21.906 } 01:11:21.906 } 01:11:21.906 ] 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "subsystem": "iobuf", 01:11:21.906 "config": [ 01:11:21.906 { 01:11:21.906 "method": "iobuf_set_options", 01:11:21.906 "params": { 01:11:21.906 "large_bufsize": 135168, 01:11:21.906 "large_pool_count": 1024, 01:11:21.906 "small_bufsize": 8192, 01:11:21.906 "small_pool_count": 8192 01:11:21.906 } 01:11:21.906 } 01:11:21.906 ] 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "subsystem": "sock", 01:11:21.906 "config": [ 01:11:21.906 { 01:11:21.906 "method": "sock_set_default_impl", 01:11:21.906 "params": { 01:11:21.906 "impl_name": "posix" 01:11:21.906 } 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "method": "sock_impl_set_options", 01:11:21.906 "params": { 01:11:21.906 "enable_ktls": false, 01:11:21.906 "enable_placement_id": 0, 01:11:21.906 "enable_quickack": false, 01:11:21.906 "enable_recv_pipe": true, 01:11:21.906 "enable_zerocopy_send_client": false, 01:11:21.906 "enable_zerocopy_send_server": true, 01:11:21.906 "impl_name": "ssl", 01:11:21.906 "recv_buf_size": 4096, 01:11:21.906 "send_buf_size": 4096, 01:11:21.906 "tls_version": 0, 01:11:21.906 "zerocopy_threshold": 0 01:11:21.906 } 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "method": "sock_impl_set_options", 01:11:21.906 "params": { 01:11:21.906 "enable_ktls": false, 01:11:21.906 "enable_placement_id": 0, 01:11:21.906 "enable_quickack": false, 01:11:21.906 "enable_recv_pipe": true, 01:11:21.906 "enable_zerocopy_send_client": false, 01:11:21.906 "enable_zerocopy_send_server": true, 01:11:21.906 "impl_name": "posix", 01:11:21.906 "recv_buf_size": 2097152, 01:11:21.906 "send_buf_size": 2097152, 01:11:21.906 "tls_version": 0, 01:11:21.906 "zerocopy_threshold": 0 01:11:21.906 } 01:11:21.906 } 01:11:21.906 ] 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "subsystem": "vmd", 01:11:21.906 "config": [] 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "subsystem": "accel", 01:11:21.906 "config": [ 01:11:21.906 { 01:11:21.906 "method": "accel_set_options", 01:11:21.906 "params": { 01:11:21.906 "buf_count": 2048, 01:11:21.906 "large_cache_size": 16, 01:11:21.906 "sequence_count": 2048, 01:11:21.906 "small_cache_size": 128, 01:11:21.906 "task_count": 2048 01:11:21.906 } 01:11:21.906 } 01:11:21.906 ] 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "subsystem": "bdev", 01:11:21.906 "config": [ 01:11:21.906 { 01:11:21.906 "method": "bdev_set_options", 01:11:21.906 "params": { 01:11:21.906 "bdev_auto_examine": true, 01:11:21.906 "bdev_io_cache_size": 256, 01:11:21.906 
"bdev_io_pool_size": 65535, 01:11:21.906 "iobuf_large_cache_size": 16, 01:11:21.906 "iobuf_small_cache_size": 128 01:11:21.906 } 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "method": "bdev_raid_set_options", 01:11:21.906 "params": { 01:11:21.906 "process_max_bandwidth_mb_sec": 0, 01:11:21.906 "process_window_size_kb": 1024 01:11:21.906 } 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "method": "bdev_iscsi_set_options", 01:11:21.906 "params": { 01:11:21.906 "timeout_sec": 30 01:11:21.906 } 01:11:21.906 }, 01:11:21.906 { 01:11:21.906 "method": "bdev_nvme_set_options", 01:11:21.906 "params": { 01:11:21.906 "action_on_timeout": "none", 01:11:21.906 "allow_accel_sequence": false, 01:11:21.906 "arbitration_burst": 0, 01:11:21.906 "bdev_retry_count": 3, 01:11:21.906 "ctrlr_loss_timeout_sec": 0, 01:11:21.906 "delay_cmd_submit": true, 01:11:21.906 "dhchap_dhgroups": [ 01:11:21.906 "null", 01:11:21.906 "ffdhe2048", 01:11:21.906 "ffdhe3072", 01:11:21.906 "ffdhe4096", 01:11:21.906 "ffdhe6144", 01:11:21.906 "ffdhe8192" 01:11:21.906 ], 01:11:21.906 "dhchap_digests": [ 01:11:21.906 "sha256", 01:11:21.906 "sha384", 01:11:21.906 "sha512" 01:11:21.906 ], 01:11:21.906 "disable_auto_failback": false, 01:11:21.906 "fast_io_fail_timeout_sec": 0, 01:11:21.907 "generate_uuids": false, 01:11:21.907 "high_priority_weight": 0, 01:11:21.907 "io_path_stat": false, 01:11:21.907 "io_queue_requests": 512, 01:11:21.907 "keep_alive_timeout_ms": 10000, 01:11:21.907 "low_priority_weight": 0, 01:11:21.907 "medium_priority_weight": 0, 01:11:21.907 "nvme_adminq_poll_period_us": 10000, 01:11:21.907 "nvme_error_stat": false, 01:11:21.907 "nvme_ioq_poll_period_us": 0, 01:11:21.907 "rdma_cm_event_timeout_ms": 0, 01:11:21.907 "rdma_max_cq_size": 0, 01:11:21.907 "rdma_srq_size": 0, 01:11:21.907 "reconnect_delay_sec": 0, 01:11:21.907 "timeout_admin_us": 0, 01:11:21.907 "timeout_us": 0, 01:11:21.907 "transport_ack_timeout": 0, 01:11:21.907 "transport_retry_count": 4, 01:11:21.907 "transport_tos": 0 01:11:21.907 } 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "method": "bdev_nvme_attach_controller", 01:11:21.907 "params": { 01:11:21.907 "adrfam": "IPv4", 01:11:21.907 "ctrlr_loss_timeout_sec": 0, 01:11:21.907 "ddgst": false, 01:11:21.907 "fast_io_fail_timeout_sec": 0, 01:11:21.907 "hdgst": false, 01:11:21.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:11:21.907 "name": "nvme0", 01:11:21.907 "prchk_guard": false, 01:11:21.907 "prchk_reftag": false, 01:11:21.907 "psk": "key0", 01:11:21.907 "reconnect_delay_sec": 0, 01:11:21.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:11:21.907 "traddr": "127.0.0.1", 01:11:21.907 "trsvcid": "4420", 01:11:21.907 "trtype": "TCP" 01:11:21.907 } 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "method": "bdev_nvme_set_hotplug", 01:11:21.907 "params": { 01:11:21.907 "enable": false, 01:11:21.907 "period_us": 100000 01:11:21.907 } 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "method": "bdev_wait_for_examine" 01:11:21.907 } 01:11:21.907 ] 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "subsystem": "nbd", 01:11:21.907 "config": [] 01:11:21.907 } 01:11:21.907 ] 01:11:21.907 }' 01:11:21.907 11:06:29 keyring_file -- keyring/file.sh@114 -- # killprocess 119160 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 119160 ']' 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@952 -- # kill -0 119160 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@953 -- # uname 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:21.907 11:06:29 
keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119160 01:11:21.907 killing process with pid 119160 01:11:21.907 Received shutdown signal, test time was about 1.000000 seconds 01:11:21.907 01:11:21.907 Latency(us) 01:11:21.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:21.907 =================================================================================================================== 01:11:21.907 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119160' 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@967 -- # kill 119160 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@972 -- # wait 119160 01:11:21.907 11:06:29 keyring_file -- keyring/file.sh@117 -- # bperfpid=119617 01:11:21.907 11:06:29 keyring_file -- keyring/file.sh@119 -- # waitforlisten 119617 /var/tmp/bperf.sock 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 119617 ']' 01:11:21.907 11:06:29 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:11:21.907 11:06:29 keyring_file -- keyring/file.sh@115 -- # echo '{ 01:11:21.907 "subsystems": [ 01:11:21.907 { 01:11:21.907 "subsystem": "keyring", 01:11:21.907 "config": [ 01:11:21.907 { 01:11:21.907 "method": "keyring_file_add_key", 01:11:21.907 "params": { 01:11:21.907 "name": "key0", 01:11:21.907 "path": "/tmp/tmp.jsO8MVzCjw" 01:11:21.907 } 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "method": "keyring_file_add_key", 01:11:21.907 "params": { 01:11:21.907 "name": "key1", 01:11:21.907 "path": "/tmp/tmp.YVfqOpk8cs" 01:11:21.907 } 01:11:21.907 } 01:11:21.907 ] 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "subsystem": "iobuf", 01:11:21.907 "config": [ 01:11:21.907 { 01:11:21.907 "method": "iobuf_set_options", 01:11:21.907 "params": { 01:11:21.907 "large_bufsize": 135168, 01:11:21.907 "large_pool_count": 1024, 01:11:21.907 "small_bufsize": 8192, 01:11:21.907 "small_pool_count": 8192 01:11:21.907 } 01:11:21.907 } 01:11:21.907 ] 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "subsystem": "sock", 01:11:21.907 "config": [ 01:11:21.907 { 01:11:21.907 "method": "sock_set_default_impl", 01:11:21.907 "params": { 01:11:21.907 "impl_name": "posix" 01:11:21.907 } 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "method": "sock_impl_set_options", 01:11:21.907 "params": { 01:11:21.907 "enable_ktls": false, 01:11:21.907 "enable_placement_id": 0, 01:11:21.907 "enable_quickack": false, 01:11:21.907 "enable_recv_pipe": true, 01:11:21.907 "enable_zerocopy_send_client": false, 01:11:21.907 "enable_zerocopy_send_server": true, 01:11:21.907 "impl_name": "ssl", 01:11:21.907 "recv_buf_size": 4096, 01:11:21.907 "send_buf_size": 4096, 01:11:21.907 "tls_version": 0, 01:11:21.907 "zerocopy_threshold": 0 01:11:21.907 } 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "method": "sock_impl_set_options", 01:11:21.907 "params": { 01:11:21.907 "enable_ktls": false, 01:11:21.907 "enable_placement_id": 0, 01:11:21.907 "enable_quickack": false, 01:11:21.907 "enable_recv_pipe": true, 01:11:21.907 "enable_zerocopy_send_client": false, 01:11:21.907 "enable_zerocopy_send_server": true, 01:11:21.907 "impl_name": "posix", 01:11:21.907 "recv_buf_size": 2097152, 01:11:21.907 "send_buf_size": 2097152, 
01:11:21.907 "tls_version": 0, 01:11:21.907 "zerocopy_threshold": 0 01:11:21.907 } 01:11:21.907 } 01:11:21.907 ] 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "subsystem": "vmd", 01:11:21.907 "config": [] 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "subsystem": "accel", 01:11:21.907 "config": [ 01:11:21.907 { 01:11:21.907 "method": "accel_set_options", 01:11:21.907 "params": { 01:11:21.907 "buf_count": 2048, 01:11:21.907 "large_cache_size": 16, 01:11:21.907 "sequence_count": 2048, 01:11:21.907 "small_cache_size": 128, 01:11:21.907 "task_count": 2048 01:11:21.907 } 01:11:21.907 } 01:11:21.907 ] 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "subsystem": "bdev", 01:11:21.907 "config": [ 01:11:21.907 { 01:11:21.907 "method": "bdev_set_options", 01:11:21.907 "params": { 01:11:21.907 "bdev_auto_examine": true, 01:11:21.907 "bdev_io_cache_size": 256, 01:11:21.907 "bdev_io_pool_size": 65535, 01:11:21.907 "iobuf_large_cache_size": 16, 01:11:21.907 "iobuf_small_cache_size": 128 01:11:21.907 } 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "method": "bdev_raid_set_options", 01:11:21.907 "params": { 01:11:21.907 "process_max_bandwidth_mb_sec": 0, 01:11:21.907 "process_window_size_kb": 1024 01:11:21.907 } 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "method": "bdev_iscsi_set_options", 01:11:21.907 "params": { 01:11:21.907 "timeout_sec": 30 01:11:21.907 } 01:11:21.907 }, 01:11:21.907 { 01:11:21.907 "method": "bdev_nvme_set_options", 01:11:21.907 "params": { 01:11:21.907 "action_on_timeout": "none", 01:11:21.907 "allow_accel_sequence": false, 01:11:21.907 "arbitration_burst": 0, 01:11:21.907 "bdev_retry_count": 3, 01:11:21.907 "ctrlr_loss_timeout_sec": 0, 01:11:21.907 "delay_cmd_submit": true, 01:11:21.907 "dhchap_dhgroups": [ 01:11:21.907 "null", 01:11:21.907 "ffdhe2048", 01:11:21.907 "ffdhe3072", 01:11:21.907 "ffdhe4096", 01:11:21.907 "ffdhe6144", 01:11:21.907 "ffdhe8192" 01:11:21.907 ], 01:11:21.907 "dhchap_digests": [ 01:11:21.907 "sha256", 01:11:21.907 "sha384", 01:11:21.907 "sha512" 01:11:21.907 ], 01:11:21.907 "disable_auto_failback": false, 01:11:21.907 "fast_io_fail_timeout_sec": 0, 01:11:21.907 "generate_uuids": false, 01:11:21.907 "high_priority_weight": 0, 01:11:21.907 "io_path_stat": false, 01:11:21.907 "io_queue_requests": 512, 01:11:21.907 "keep_alive_timeout_ms": 10000, 01:11:21.907 "low_priority_weight": 0, 01:11:21.907 "medium_priority_weight": 0, 01:11:21.908 "nvme_adminq_poll_period_us": 10000, 01:11:21.908 "nvme_error_stat": false, 01:11:21.908 "nvme_ioq_poll_period_us": 0, 01:11:21.908 "rdma_cm_event_timeout_ms": 0, 01:11:21.908 "rdma_max_cq_size": 0, 01:11:21.908 "rdma_srq_size": 0, 01:11:21.908 "reconnect_delay_sec": 0, 01:11:21.908 "timeout_admin_us": 0, 01:11:21.908 "timeout_us": 0, 01:11:21.908 "transport_ack_timeout": 0, 01:11:21.908 "transport_retry_count": 4, 01:11:21.908 "transport_tos": 0 01:11:21.908 } 01:11:21.908 }, 01:11:21.908 { 01:11:21.908 "method": "bdev_nvme_attach_controller", 01:11:21.908 "params": { 01:11:21.908 "adrfam": "IPv4", 01:11:21.908 "ctrlr_loss_timeout_sec": 0, 01:11:21.908 "ddgst": false, 01:11:21.908 "fast_io_fail_timeout_sec": 0, 01:11:21.908 "hdgst": false, 01:11:21.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:11:21.908 "name": "nvme0", 01:11:21.908 "prchk_guard": false, 01:11:21.908 "prchk_reftag": false, 01:11:21.908 "psk": "key0", 01:11:21.908 "reconnect_delay_sec": 0, 01:11:21.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:11:21.908 "traddr": "127.0.0.1", 01:11:21.908 "trsvcid": "4420", 01:11:21.908 "trtype": "TCP" 01:11:21.908 } 01:11:21.908 }, 
01:11:21.908 { 01:11:21.908 "method": "bdev_nvme_set_hotplug", 01:11:21.908 "params": { 01:11:21.908 "enable": false, 01:11:21.908 "period_us": 100000 01:11:21.908 } 01:11:21.908 }, 01:11:21.908 { 01:11:21.908 "method": "bdev_wait_for_examine" 01:11:21.908 } 01:11:21.908 ] 01:11:21.908 }, 01:11:21.908 { 01:11:21.908 "subsystem": "nbd", 01:11:21.908 "config": [] 01:11:21.908 } 01:11:21.908 ] 01:11:21.908 }' 01:11:21.908 11:06:29 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:21.908 11:06:29 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:11:21.908 11:06:29 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:11:21.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:11:21.908 11:06:29 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:21.908 11:06:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:11:22.180 [2024-07-22 11:06:29.874281] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:11:22.181 [2024-07-22 11:06:29.874355] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119617 ] 01:11:22.181 [2024-07-22 11:06:29.992805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:11:22.181 [2024-07-22 11:06:30.017041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:22.181 [2024-07-22 11:06:30.062513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:22.439 [2024-07-22 11:06:30.217706] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:23.004 11:06:30 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:23.004 11:06:30 keyring_file -- common/autotest_common.sh@862 -- # return 0 01:11:23.004 11:06:30 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 01:11:23.004 11:06:30 keyring_file -- keyring/file.sh@120 -- # jq length 01:11:23.004 11:06:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:23.261 11:06:30 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 01:11:23.261 11:06:30 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 01:11:23.261 11:06:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:11:23.261 11:06:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:11:23.261 11:06:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:23.261 11:06:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:23.261 11:06:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:11:23.261 11:06:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:11:23.261 11:06:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 01:11:23.261 11:06:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:11:23.261 11:06:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 
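(A minimal sketch of the refcount checks being traced here, assuming the same rpc.py script and /var/tmp/bperf.sock socket that the test uses; as the xtrace shows, get_refcnt is just keyring_get_keys filtered through jq.)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  "$rpc" -s "$sock" keyring_get_keys | jq length                                             # number of registered keys
  "$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt    # refcnt grows while an attached controller uses key0 as its TLS PSK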
01:11:23.261 11:06:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:23.261 11:06:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:11:23.261 11:06:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:23.519 11:06:31 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 01:11:23.519 11:06:31 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 01:11:23.519 11:06:31 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 01:11:23.519 11:06:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:11:23.778 11:06:31 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 01:11:23.778 11:06:31 keyring_file -- keyring/file.sh@1 -- # cleanup 01:11:23.778 11:06:31 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.jsO8MVzCjw /tmp/tmp.YVfqOpk8cs 01:11:23.778 11:06:31 keyring_file -- keyring/file.sh@20 -- # killprocess 119617 01:11:23.778 11:06:31 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 119617 ']' 01:11:23.778 11:06:31 keyring_file -- common/autotest_common.sh@952 -- # kill -0 119617 01:11:23.778 11:06:31 keyring_file -- common/autotest_common.sh@953 -- # uname 01:11:23.778 11:06:31 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:23.778 11:06:31 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119617 01:11:23.778 11:06:31 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:23.778 killing process with pid 119617 01:11:23.778 11:06:31 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:23.778 11:06:31 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119617' 01:11:23.778 Received shutdown signal, test time was about 1.000000 seconds 01:11:23.778 01:11:23.778 Latency(us) 01:11:23.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:23.778 =================================================================================================================== 01:11:23.778 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:11:23.778 11:06:31 keyring_file -- common/autotest_common.sh@967 -- # kill 119617 01:11:23.778 11:06:31 keyring_file -- common/autotest_common.sh@972 -- # wait 119617 01:11:24.038 11:06:31 keyring_file -- keyring/file.sh@21 -- # killprocess 119126 01:11:24.038 11:06:31 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 119126 ']' 01:11:24.038 11:06:31 keyring_file -- common/autotest_common.sh@952 -- # kill -0 119126 01:11:24.038 11:06:31 keyring_file -- common/autotest_common.sh@953 -- # uname 01:11:24.038 11:06:31 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:24.038 11:06:31 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119126 01:11:24.038 killing process with pid 119126 01:11:24.038 11:06:31 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:11:24.038 11:06:31 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:11:24.038 11:06:31 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119126' 01:11:24.038 11:06:31 keyring_file -- common/autotest_common.sh@967 -- # kill 119126 01:11:24.038 [2024-07-22 11:06:31.838683] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: 
deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:11:24.038 11:06:31 keyring_file -- common/autotest_common.sh@972 -- # wait 119126 01:11:24.395 01:11:24.396 real 0m13.896s 01:11:24.396 user 0m33.299s 01:11:24.396 sys 0m3.568s 01:11:24.396 11:06:32 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 01:11:24.396 11:06:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:11:24.396 ************************************ 01:11:24.396 END TEST keyring_file 01:11:24.396 ************************************ 01:11:24.396 11:06:32 -- common/autotest_common.sh@1142 -- # return 0 01:11:24.396 11:06:32 -- spdk/autotest.sh@296 -- # [[ y == y ]] 01:11:24.396 11:06:32 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:11:24.396 11:06:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:11:24.396 11:06:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:11:24.396 11:06:32 -- common/autotest_common.sh@10 -- # set +x 01:11:24.396 ************************************ 01:11:24.396 START TEST keyring_linux 01:11:24.396 ************************************ 01:11:24.396 11:06:32 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:11:24.655 * Looking for test storage... 01:11:24.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:11:24.655 11:06:32 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:11:24.655 11:06:32 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5d34c6e8-e0d0-4af9-bf7d-a44089ca79f7 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:24.655 11:06:32 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:24.655 11:06:32 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:24.655 11:06:32 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
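(A condensed sketch of the key-file permission behaviour exercised by the keyring_file test above; the rpc.py path, bperf.sock socket, and error strings are taken from that run, and the payload is the interchange-format PSK for 00112233445566778899aabbccddeeff shown elsewhere in this log.)
  key=$(mktemp)
  printf 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:\n' > "$key"
  chmod 0660 "$key"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key"
  # rejected: "Invalid permissions for key file ... 0100660" - group/other access is not allowed
  chmod 0600 "$key"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key"
  # accepted; deleting the file afterwards makes a later attach fail with "Could not stat key file"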
01:11:24.655 11:06:32 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:24.655 11:06:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:24.655 11:06:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:24.656 11:06:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:24.656 11:06:32 keyring_linux -- paths/export.sh@5 -- # export PATH 01:11:24.656 11:06:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@47 -- # : 0 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:11:24.656 11:06:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:11:24.656 11:06:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:11:24.656 11:06:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:11:24.656 11:06:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:11:24.656 11:06:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:11:24.656 11:06:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest 
path 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@704 -- # digest=0 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@705 -- # python - 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:11:24.656 /tmp/:spdk-test:key0 01:11:24.656 11:06:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@704 -- # digest=0 01:11:24.656 11:06:32 keyring_linux -- nvmf/common.sh@705 -- # python - 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:11:24.656 /tmp/:spdk-test:key1 01:11:24.656 11:06:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:11:24.656 11:06:32 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:11:24.656 11:06:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=119766 01:11:24.656 11:06:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 119766 01:11:24.656 11:06:32 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 119766 ']' 01:11:24.656 11:06:32 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:24.656 11:06:32 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:24.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:24.656 11:06:32 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
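(A sketch of what prep_key writes for this Linux-keyring variant, using the values traced above: key0=00112233445566778899aabbccddeeff and key1=112233445566778899aabbccddeeff00, stored as interchange-format PSKs in /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. The key0 payload below is the string printed later in this log; key1's payload is produced the same way by format_interchange_psk.)
  printf 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:\n' > /tmp/:spdk-test:key0
  chmod 0600 /tmp/:spdk-test:key0
  # the same PSK string is also loaded into the kernel session keyring below via keyctl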
01:11:24.656 11:06:32 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:24.656 11:06:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:11:24.656 [2024-07-22 11:06:32.519505] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 01:11:24.656 [2024-07-22 11:06:32.519579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119766 ] 01:11:24.915 [2024-07-22 11:06:32.636815] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:11:24.915 [2024-07-22 11:06:32.662094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:24.915 [2024-07-22 11:06:32.706428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:11:25.480 11:06:33 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:25.480 11:06:33 keyring_linux -- common/autotest_common.sh@862 -- # return 0 01:11:25.480 11:06:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:11:25.480 11:06:33 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:25.480 11:06:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:11:25.480 [2024-07-22 11:06:33.393923] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:25.739 null0 01:11:25.739 [2024-07-22 11:06:33.425890] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:11:25.739 [2024-07-22 11:06:33.426082] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:11:25.739 11:06:33 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:25.739 11:06:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:11:25.739 525920546 01:11:25.739 11:06:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:11:25.739 178402436 01:11:25.739 11:06:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=119802 01:11:25.739 11:06:33 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:11:25.739 11:06:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 119802 /var/tmp/bperf.sock 01:11:25.739 11:06:33 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 119802 ']' 01:11:25.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:11:25.739 11:06:33 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:11:25.739 11:06:33 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:25.739 11:06:33 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:11:25.739 11:06:33 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:25.739 11:06:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:11:25.739 [2024-07-22 11:06:33.507675] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
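(A sketch of the kernel-keyring round trip that linux.sh drives from here on; the serial number 525920546 above is whatever the kernel assigned in this run, so the value will differ elsewhere.)
  sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
  keyctl search @s user :spdk-test:key0   # resolves the key name back to the same serial
  keyctl print "$sn"                      # dumps the payload; the test compares it to the expected PSK string
  keyctl unlink "$sn"                     # cleanup, reported below as "1 links removed"
  # bdevperf then references the key by name, e.g. bdev_nvme_attach_controller ... --psk :spdk-test:key0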
01:11:25.739 [2024-07-22 11:06:33.507740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119802 ] 01:11:25.739 [2024-07-22 11:06:33.624789] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 01:11:25.739 [2024-07-22 11:06:33.648538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:25.997 [2024-07-22 11:06:33.693784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:26.562 11:06:34 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:26.562 11:06:34 keyring_linux -- common/autotest_common.sh@862 -- # return 0 01:11:26.562 11:06:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:11:26.562 11:06:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:11:26.823 11:06:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:11:26.823 11:06:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:11:27.082 11:06:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:11:27.082 11:06:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:11:27.082 [2024-07-22 11:06:34.989275] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:27.342 nvme0n1 01:11:27.342 11:06:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 01:11:27.342 11:06:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:11:27.342 11:06:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:11:27.342 11:06:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:11:27.342 11:06:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:11:27.342 11:06:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:27.342 11:06:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:11:27.342 11:06:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:11:27.342 11:06:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:11:27.342 11:06:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:11:27.601 11:06:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:11:27.601 11:06:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:27.601 11:06:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:11:27.601 11:06:35 keyring_linux -- keyring/linux.sh@25 -- # sn=525920546 01:11:27.601 11:06:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:11:27.601 11:06:35 keyring_linux -- 
keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:11:27.601 11:06:35 keyring_linux -- keyring/linux.sh@26 -- # [[ 525920546 == \5\2\5\9\2\0\5\4\6 ]] 01:11:27.601 11:06:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 525920546 01:11:27.601 11:06:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:11:27.601 11:06:35 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:11:27.860 Running I/O for 1 seconds... 01:11:28.796 01:11:28.796 Latency(us) 01:11:28.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:28.796 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:11:28.796 nvme0n1 : 1.01 18256.97 71.32 0.00 0.00 6981.97 5737.69 14528.46 01:11:28.796 =================================================================================================================== 01:11:28.796 Total : 18256.97 71.32 0.00 0.00 6981.97 5737.69 14528.46 01:11:28.796 0 01:11:28.797 11:06:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:11:28.797 11:06:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:11:29.055 11:06:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:11:29.055 11:06:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:11:29.055 11:06:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:11:29.055 11:06:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:11:29.055 11:06:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:11:29.055 11:06:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:11:29.313 11:06:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:11:29.313 11:06:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:11:29.313 11:06:37 keyring_linux -- keyring/linux.sh@23 -- # return 01:11:29.313 11:06:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:11:29.313 11:06:37 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 01:11:29.313 11:06:37 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:11:29.313 11:06:37 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:11:29.313 11:06:37 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:29.313 11:06:37 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:11:29.313 11:06:37 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:29.313 11:06:37 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:11:29.313 11:06:37 keyring_linux -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:11:29.313 [2024-07-22 11:06:37.221442] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:11:29.313 [2024-07-22 11:06:37.222044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17291b0 (107): Transport endpoint is not connected 01:11:29.313 [2024-07-22 11:06:37.223032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17291b0 (9): Bad file descriptor 01:11:29.313 [2024-07-22 11:06:37.224028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:11:29.313 [2024-07-22 11:06:37.224050] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:11:29.313 [2024-07-22 11:06:37.224058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:11:29.313 2024/07/22 11:06:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:11:29.313 request: 01:11:29.313 { 01:11:29.313 "method": "bdev_nvme_attach_controller", 01:11:29.313 "params": { 01:11:29.313 "name": "nvme0", 01:11:29.313 "trtype": "tcp", 01:11:29.313 "traddr": "127.0.0.1", 01:11:29.313 "adrfam": "ipv4", 01:11:29.313 "trsvcid": "4420", 01:11:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:11:29.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:11:29.313 "prchk_reftag": false, 01:11:29.313 "prchk_guard": false, 01:11:29.313 "hdgst": false, 01:11:29.313 "ddgst": false, 01:11:29.313 "psk": ":spdk-test:key1" 01:11:29.313 } 01:11:29.313 } 01:11:29.313 Got JSON-RPC error response 01:11:29.313 GoRPCClient: error on JSON-RPC call 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@651 -- # es=1 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@33 -- # sn=525920546 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 525920546 01:11:29.571 1 links removed 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:11:29.571 11:06:37 keyring_linux -- 
keyring/linux.sh@31 -- # local name=key1 sn 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@33 -- # sn=178402436 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 178402436 01:11:29.571 1 links removed 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 119802 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 119802 ']' 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 119802 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@953 -- # uname 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119802 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:29.571 killing process with pid 119802 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119802' 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@967 -- # kill 119802 01:11:29.571 Received shutdown signal, test time was about 1.000000 seconds 01:11:29.571 01:11:29.571 Latency(us) 01:11:29.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:29.571 =================================================================================================================== 01:11:29.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@972 -- # wait 119802 01:11:29.571 11:06:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 119766 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 119766 ']' 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 119766 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@953 -- # uname 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:29.571 11:06:37 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119766 01:11:29.830 11:06:37 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:11:29.830 11:06:37 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:11:29.830 killing process with pid 119766 01:11:29.830 11:06:37 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119766' 01:11:29.830 11:06:37 keyring_linux -- common/autotest_common.sh@967 -- # kill 119766 01:11:29.830 11:06:37 keyring_linux -- common/autotest_common.sh@972 -- # wait 119766 01:11:30.089 ************************************ 01:11:30.089 END TEST keyring_linux 01:11:30.089 ************************************ 01:11:30.089 01:11:30.089 real 0m5.625s 01:11:30.089 user 0m10.093s 01:11:30.089 sys 0m1.823s 01:11:30.089 11:06:37 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 01:11:30.089 11:06:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:11:30.089 11:06:37 -- common/autotest_common.sh@1142 -- # return 0 01:11:30.089 11:06:37 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@312 -- # '[' 0 -eq 
1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 01:11:30.089 11:06:37 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 01:11:30.089 11:06:37 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 01:11:30.089 11:06:37 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 01:11:30.089 11:06:37 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 01:11:30.089 11:06:37 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 01:11:30.089 11:06:37 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 01:11:30.089 11:06:37 -- common/autotest_common.sh@722 -- # xtrace_disable 01:11:30.089 11:06:37 -- common/autotest_common.sh@10 -- # set +x 01:11:30.089 11:06:37 -- spdk/autotest.sh@383 -- # autotest_cleanup 01:11:30.089 11:06:37 -- common/autotest_common.sh@1392 -- # local autotest_es=0 01:11:30.089 11:06:37 -- common/autotest_common.sh@1393 -- # xtrace_disable 01:11:30.089 11:06:37 -- common/autotest_common.sh@10 -- # set +x 01:11:32.643 INFO: APP EXITING 01:11:32.643 INFO: killing all VMs 01:11:32.643 INFO: killing vhost app 01:11:32.643 INFO: EXIT DONE 01:11:33.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:11:33.209 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:11:33.209 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:11:34.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:11:34.145 Cleaning 01:11:34.145 Removing: /var/run/dpdk/spdk0/config 01:11:34.145 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 01:11:34.145 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 01:11:34.145 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 01:11:34.145 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 01:11:34.145 Removing: /var/run/dpdk/spdk0/fbarray_memzone 01:11:34.145 Removing: /var/run/dpdk/spdk0/hugepage_info 01:11:34.145 Removing: /var/run/dpdk/spdk1/config 01:11:34.145 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 01:11:34.145 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 01:11:34.145 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 01:11:34.145 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 01:11:34.145 Removing: /var/run/dpdk/spdk1/fbarray_memzone 01:11:34.145 Removing: /var/run/dpdk/spdk1/hugepage_info 01:11:34.145 Removing: /var/run/dpdk/spdk2/config 01:11:34.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 01:11:34.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 01:11:34.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 01:11:34.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 01:11:34.145 Removing: /var/run/dpdk/spdk2/fbarray_memzone 01:11:34.145 Removing: /var/run/dpdk/spdk2/hugepage_info 01:11:34.145 Removing: /var/run/dpdk/spdk3/config 01:11:34.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 01:11:34.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 01:11:34.145 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 01:11:34.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 01:11:34.145 Removing: /var/run/dpdk/spdk3/fbarray_memzone 01:11:34.145 Removing: /var/run/dpdk/spdk3/hugepage_info 01:11:34.145 Removing: /var/run/dpdk/spdk4/config 01:11:34.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 01:11:34.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 01:11:34.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 01:11:34.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 01:11:34.145 Removing: /var/run/dpdk/spdk4/fbarray_memzone 01:11:34.145 Removing: /var/run/dpdk/spdk4/hugepage_info 01:11:34.145 Removing: /dev/shm/nvmf_trace.0 01:11:34.145 Removing: /dev/shm/spdk_tgt_trace.pid73786 01:11:34.404 Removing: /var/run/dpdk/spdk0 01:11:34.404 Removing: /var/run/dpdk/spdk1 01:11:34.404 Removing: /var/run/dpdk/spdk2 01:11:34.404 Removing: /var/run/dpdk/spdk3 01:11:34.404 Removing: /var/run/dpdk/spdk4 01:11:34.404 Removing: /var/run/dpdk/spdk_pid100069 01:11:34.404 Removing: /var/run/dpdk/spdk_pid100110 01:11:34.404 Removing: /var/run/dpdk/spdk_pid100150 01:11:34.404 Removing: /var/run/dpdk/spdk_pid100190 01:11:34.404 Removing: /var/run/dpdk/spdk_pid100344 01:11:34.404 Removing: /var/run/dpdk/spdk_pid100491 01:11:34.404 Removing: /var/run/dpdk/spdk_pid100744 01:11:34.404 Removing: /var/run/dpdk/spdk_pid100861 01:11:34.404 Removing: /var/run/dpdk/spdk_pid101103 01:11:34.404 Removing: /var/run/dpdk/spdk_pid101223 01:11:34.404 Removing: /var/run/dpdk/spdk_pid101352 01:11:34.404 Removing: /var/run/dpdk/spdk_pid101687 01:11:34.404 Removing: /var/run/dpdk/spdk_pid102060 01:11:34.404 Removing: /var/run/dpdk/spdk_pid102062 01:11:34.404 Removing: /var/run/dpdk/spdk_pid104313 01:11:34.404 Removing: /var/run/dpdk/spdk_pid104623 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105119 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105127 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105465 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105479 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105499 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105524 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105530 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105672 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105675 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105782 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105785 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105898 01:11:34.404 Removing: /var/run/dpdk/spdk_pid105901 01:11:34.404 Removing: /var/run/dpdk/spdk_pid106359 01:11:34.404 Removing: /var/run/dpdk/spdk_pid106402 01:11:34.404 Removing: /var/run/dpdk/spdk_pid106553 01:11:34.404 Removing: /var/run/dpdk/spdk_pid106668 01:11:34.404 Removing: /var/run/dpdk/spdk_pid107057 01:11:34.404 Removing: /var/run/dpdk/spdk_pid107302 01:11:34.404 Removing: /var/run/dpdk/spdk_pid107787 01:11:34.404 Removing: /var/run/dpdk/spdk_pid108372 01:11:34.404 Removing: /var/run/dpdk/spdk_pid109676 01:11:34.404 Removing: /var/run/dpdk/spdk_pid110282 01:11:34.404 Removing: /var/run/dpdk/spdk_pid110285 01:11:34.404 Removing: /var/run/dpdk/spdk_pid112200 01:11:34.404 Removing: /var/run/dpdk/spdk_pid112287 01:11:34.404 Removing: /var/run/dpdk/spdk_pid112372 01:11:34.404 Removing: /var/run/dpdk/spdk_pid112461 01:11:34.404 Removing: /var/run/dpdk/spdk_pid112614 01:11:34.404 Removing: /var/run/dpdk/spdk_pid112699 01:11:34.404 Removing: /var/run/dpdk/spdk_pid112788 01:11:34.404 Removing: /var/run/dpdk/spdk_pid112868 01:11:34.404 Removing: /var/run/dpdk/spdk_pid113212 01:11:34.404 Removing: 
/var/run/dpdk/spdk_pid113907 01:11:34.404 Removing: /var/run/dpdk/spdk_pid115255 01:11:34.404 Removing: /var/run/dpdk/spdk_pid115449 01:11:34.404 Removing: /var/run/dpdk/spdk_pid115739 01:11:34.404 Removing: /var/run/dpdk/spdk_pid116029 01:11:34.663 Removing: /var/run/dpdk/spdk_pid116586 01:11:34.663 Removing: /var/run/dpdk/spdk_pid116592 01:11:34.663 Removing: /var/run/dpdk/spdk_pid116952 01:11:34.663 Removing: /var/run/dpdk/spdk_pid117105 01:11:34.663 Removing: /var/run/dpdk/spdk_pid117262 01:11:34.663 Removing: /var/run/dpdk/spdk_pid117360 01:11:34.663 Removing: /var/run/dpdk/spdk_pid117515 01:11:34.663 Removing: /var/run/dpdk/spdk_pid117624 01:11:34.663 Removing: /var/run/dpdk/spdk_pid118308 01:11:34.663 Removing: /var/run/dpdk/spdk_pid118339 01:11:34.663 Removing: /var/run/dpdk/spdk_pid118374 01:11:34.663 Removing: /var/run/dpdk/spdk_pid118629 01:11:34.663 Removing: /var/run/dpdk/spdk_pid118659 01:11:34.663 Removing: /var/run/dpdk/spdk_pid118695 01:11:34.663 Removing: /var/run/dpdk/spdk_pid119126 01:11:34.663 Removing: /var/run/dpdk/spdk_pid119160 01:11:34.663 Removing: /var/run/dpdk/spdk_pid119617 01:11:34.663 Removing: /var/run/dpdk/spdk_pid119766 01:11:34.663 Removing: /var/run/dpdk/spdk_pid119802 01:11:34.663 Removing: /var/run/dpdk/spdk_pid73641 01:11:34.663 Removing: /var/run/dpdk/spdk_pid73786 01:11:34.663 Removing: /var/run/dpdk/spdk_pid74047 01:11:34.663 Removing: /var/run/dpdk/spdk_pid74134 01:11:34.663 Removing: /var/run/dpdk/spdk_pid74174 01:11:34.663 Removing: /var/run/dpdk/spdk_pid74283 01:11:34.663 Removing: /var/run/dpdk/spdk_pid74313 01:11:34.663 Removing: /var/run/dpdk/spdk_pid74431 01:11:34.664 Removing: /var/run/dpdk/spdk_pid74700 01:11:34.664 Removing: /var/run/dpdk/spdk_pid74865 01:11:34.664 Removing: /var/run/dpdk/spdk_pid74947 01:11:34.664 Removing: /var/run/dpdk/spdk_pid75033 01:11:34.664 Removing: /var/run/dpdk/spdk_pid75123 01:11:34.664 Removing: /var/run/dpdk/spdk_pid75156 01:11:34.664 Removing: /var/run/dpdk/spdk_pid75191 01:11:34.664 Removing: /var/run/dpdk/spdk_pid75253 01:11:34.664 Removing: /var/run/dpdk/spdk_pid75376 01:11:34.664 Removing: /var/run/dpdk/spdk_pid75974 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76038 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76103 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76131 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76206 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76234 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76308 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76336 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76386 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76412 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76458 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76488 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76629 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76670 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76739 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76807 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76833 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76899 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76928 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76958 01:11:34.664 Removing: /var/run/dpdk/spdk_pid76999 01:11:34.664 Removing: /var/run/dpdk/spdk_pid77028 01:11:34.664 Removing: /var/run/dpdk/spdk_pid77063 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77097 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77126 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77161 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77195 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77230 01:11:34.923 Removing: 
/var/run/dpdk/spdk_pid77264 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77293 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77328 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77362 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77397 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77426 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77468 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77501 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77535 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77571 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77637 01:11:34.923 Removing: /var/run/dpdk/spdk_pid77741 01:11:34.923 Removing: /var/run/dpdk/spdk_pid78152 01:11:34.923 Removing: /var/run/dpdk/spdk_pid84892 01:11:34.923 Removing: /var/run/dpdk/spdk_pid85229 01:11:34.923 Removing: /var/run/dpdk/spdk_pid87665 01:11:34.923 Removing: /var/run/dpdk/spdk_pid88040 01:11:34.923 Removing: /var/run/dpdk/spdk_pid88282 01:11:34.923 Removing: /var/run/dpdk/spdk_pid88322 01:11:34.923 Removing: /var/run/dpdk/spdk_pid88935 01:11:34.923 Removing: /var/run/dpdk/spdk_pid89366 01:11:34.923 Removing: /var/run/dpdk/spdk_pid89416 01:11:34.923 Removing: /var/run/dpdk/spdk_pid89763 01:11:34.923 Removing: /var/run/dpdk/spdk_pid90282 01:11:34.923 Removing: /var/run/dpdk/spdk_pid90705 01:11:34.923 Removing: /var/run/dpdk/spdk_pid91633 01:11:34.923 Removing: /var/run/dpdk/spdk_pid92604 01:11:34.923 Removing: /var/run/dpdk/spdk_pid92720 01:11:34.923 Removing: /var/run/dpdk/spdk_pid92788 01:11:34.923 Removing: /var/run/dpdk/spdk_pid94246 01:11:34.923 Removing: /var/run/dpdk/spdk_pid94470 01:11:34.923 Removing: /var/run/dpdk/spdk_pid99384 01:11:34.923 Removing: /var/run/dpdk/spdk_pid99816 01:11:34.923 Removing: /var/run/dpdk/spdk_pid99917 01:11:34.923 Clean 01:11:34.923 11:06:42 -- common/autotest_common.sh@1451 -- # return 0 01:11:34.923 11:06:42 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 01:11:34.923 11:06:42 -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:34.923 11:06:42 -- common/autotest_common.sh@10 -- # set +x 01:11:35.183 11:06:42 -- spdk/autotest.sh@386 -- # timing_exit autotest 01:11:35.183 11:06:42 -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:35.183 11:06:42 -- common/autotest_common.sh@10 -- # set +x 01:11:35.183 11:06:42 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:11:35.183 11:06:42 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 01:11:35.183 11:06:42 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 01:11:35.183 11:06:42 -- spdk/autotest.sh@391 -- # hash lcov 01:11:35.183 11:06:42 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 01:11:35.183 11:06:42 -- spdk/autotest.sh@393 -- # hostname 01:11:35.183 11:06:42 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 01:11:35.442 geninfo: WARNING: invalid characters removed from testname! 
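The coverage collection that begins here (autotest.sh@393 onward) is a standard lcov capture, merge and filter sequence; a minimal sketch of the same flow, with the long --rc/--no-external option lists and the ../output path prefixes dropped for brevity (only the directories and exclude patterns visible in the log are used, everything else is generic lcov usage):

  # capture the counters produced by the test run into a tracefile
  lcov -q -c -d /home/vagrant/spdk_repo/spdk -o cov_test.info
  # merge the pre-test baseline with the test capture
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  # strip external and helper sources from the combined report
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r cov_total.info "$pat" -o cov_total.info
  done
  # drop the intermediate tracefiles once cov_total.info is complete
  rm -f cov_base.info cov_test.info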
01:12:01.970 11:07:06 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:12:01.970 11:07:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:12:03.866 11:07:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:12:06.392 11:07:14 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:12:08.925 11:07:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:12:10.844 11:07:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:12:12.750 11:07:20 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 01:12:12.750 11:07:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:12:12.750 11:07:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 01:12:12.750 11:07:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:12.750 11:07:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:12.750 11:07:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:12.750 11:07:20 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:12.750 11:07:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:12.750 11:07:20 -- paths/export.sh@5 -- $ export PATH 01:12:12.750 11:07:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:12.750 11:07:20 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 01:12:12.750 11:07:20 -- common/autobuild_common.sh@447 -- $ date +%s 01:12:12.750 11:07:20 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721646440.XXXXXX 01:12:12.750 11:07:20 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721646440.envgPI 01:12:12.750 11:07:20 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 01:12:12.750 11:07:20 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 01:12:12.750 11:07:20 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 01:12:12.750 11:07:20 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 01:12:12.750 11:07:20 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 01:12:12.750 11:07:20 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 01:12:12.750 11:07:20 -- common/autobuild_common.sh@463 -- $ get_config_params 01:12:12.750 11:07:20 -- common/autotest_common.sh@396 -- $ xtrace_disable 01:12:12.750 11:07:20 -- common/autotest_common.sh@10 -- $ set +x 01:12:12.750 11:07:20 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 01:12:12.750 11:07:20 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 01:12:12.750 11:07:20 -- pm/common@17 -- $ local monitor 01:12:12.750 11:07:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:12:12.750 11:07:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:12:12.750 11:07:20 -- pm/common@21 -- $ date +%s 01:12:12.750 11:07:20 -- pm/common@25 -- $ sleep 1 01:12:12.750 11:07:20 -- pm/common@21 -- $ date +%s 01:12:12.750 11:07:20 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721646440 01:12:12.750 11:07:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721646440 01:12:12.750 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721646440_collect-cpu-load.pm.log 01:12:12.750 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721646440_collect-vmstat.pm.log 01:12:13.687 11:07:21 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 01:12:13.687 11:07:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 01:12:13.687 11:07:21 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 01:12:13.687 11:07:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 01:12:13.687 11:07:21 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 01:12:13.687 11:07:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 01:12:13.687 11:07:21 -- spdk/autopackage.sh@19 -- $ timing_finish 01:12:13.687 11:07:21 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 01:12:13.687 11:07:21 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 01:12:13.946 11:07:21 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:12:13.946 11:07:21 -- spdk/autopackage.sh@20 -- $ exit 0 01:12:13.946 11:07:21 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 01:12:13.946 11:07:21 -- pm/common@29 -- $ signal_monitor_resources TERM 01:12:13.946 11:07:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 01:12:13.946 11:07:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:12:13.946 11:07:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 01:12:13.946 11:07:21 -- pm/common@44 -- $ pid=121553 01:12:13.946 11:07:21 -- pm/common@50 -- $ kill -TERM 121553 01:12:13.946 11:07:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:12:13.946 11:07:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 01:12:13.946 11:07:21 -- pm/common@44 -- $ pid=121555 01:12:13.946 11:07:21 -- pm/common@50 -- $ kill -TERM 121555 01:12:13.946 + [[ -n 5837 ]] 01:12:13.946 + sudo kill 5837 01:12:13.956 [Pipeline] } 01:12:13.976 [Pipeline] // timeout 01:12:13.982 [Pipeline] } 01:12:14.000 [Pipeline] // stage 01:12:14.006 [Pipeline] } 01:12:14.025 [Pipeline] // catchError 01:12:14.036 [Pipeline] stage 01:12:14.039 [Pipeline] { (Stop VM) 01:12:14.054 [Pipeline] sh 01:12:14.336 + vagrant halt 01:12:17.616 ==> default: Halting domain... 01:12:24.210 [Pipeline] sh 01:12:24.487 + vagrant destroy -f 01:12:27.766 ==> default: Removing domain... 
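The stop_monitor_resources teardown above (pm/common@42 through @50) reduces to a pid-file pattern; a rough sketch of that shape, assuming each pid is read back from the file the collector wrote at start-up (only the file names come from the log, the loop itself is an illustration):

  # signal each background collector recorded under ../output/power
  for pidfile in /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid \
                 /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid; do
    [[ -e "$pidfile" ]] || continue        # monitor was never started
    kill -TERM "$(cat "$pidfile")"         # ask the collector to exit cleanly
  done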
01:12:27.779 [Pipeline] sh 01:12:28.060 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 01:12:28.071 [Pipeline] } 01:12:28.090 [Pipeline] // stage 01:12:28.097 [Pipeline] } 01:12:28.116 [Pipeline] // dir 01:12:28.122 [Pipeline] } 01:12:28.141 [Pipeline] // wrap 01:12:28.148 [Pipeline] } 01:12:28.164 [Pipeline] // catchError 01:12:28.173 [Pipeline] stage 01:12:28.175 [Pipeline] { (Epilogue) 01:12:28.190 [Pipeline] sh 01:12:28.471 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:12:33.749 [Pipeline] catchError 01:12:33.752 [Pipeline] { 01:12:33.768 [Pipeline] sh 01:12:34.047 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:12:34.306 Artifacts sizes are good 01:12:34.315 [Pipeline] } 01:12:34.349 [Pipeline] // catchError 01:12:34.356 [Pipeline] archiveArtifacts 01:12:34.361 Archiving artifacts 01:12:34.525 [Pipeline] cleanWs 01:12:34.535 [WS-CLEANUP] Deleting project workspace... 01:12:34.535 [WS-CLEANUP] Deferred wipeout is used... 01:12:34.540 [WS-CLEANUP] done 01:12:34.542 [Pipeline] } 01:12:34.558 [Pipeline] // stage 01:12:34.562 [Pipeline] } 01:12:34.582 [Pipeline] // node 01:12:34.587 [Pipeline] End of Pipeline 01:12:34.620 Finished: SUCCESS
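check_artifacts_size.sh itself is not part of this log; purely as an illustration of the gate that prints "Artifacts sizes are good" above, a du-based check of roughly this shape would produce the same pass/fail outcome (the output directory name matches the log, while the limit value and the use of du are assumptions, not taken from the real script):

  # hypothetical artifact size gate; the real limit lives in check_artifacts_size.sh
  limit_kb=$((5 * 1024 * 1024))                   # assumed 5 GiB ceiling, in KiB
  used_kb=$(du -sk output | awk '{print $1}')     # total size of the archived output dir
  if (( used_kb > limit_kb )); then
    echo "Artifacts are too large: ${used_kb} KiB (limit ${limit_kb} KiB)"
    exit 1
  fi
  echo 'Artifacts sizes are good'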